First of all, sorry if this question has already been asked somewhere, but after a few hours on Google I still can't find an answer.
I am pretty new to portlet development (but we have a shortage of developers and I have to work with it from time to time), so the solution might be something trivial; I really don't have enough experience with it.
The problem is that I have two portlets on a page, and I am trying to let one of them know about changes in the other. For this I use IPC (inter-portlet communication). In the first one I have a Liferay.fire call:
function fire(key, value) {
    Liferay.fire(
        'category', {
            id: key,
            name: value
        }
    );
}
In the other I have a Liferay.on('category', function(category) {...}) listener with an Ajax call and some rendering methods inside.
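Roughly like this (a sketch - the endpoint URL and the handler body are placeholders, not my real code):

Liferay.on('category', function(category) {
    // category.id and category.name come from the Liferay.fire call above
    fetch('/some/endpoint?id=' + category.id)   // placeholder URL
        .then(function(response) { return response.json(); })
        .then(function(data) {
            // rendering methods go here
        });
});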
Now if I visit the page and click the corresponding buttons, at first everything works just fine. However, if I navigate away from this page and come back, the listener runs twice. Navigating again -> three times. And so on. But if I reload the page (with F5 or Ctrl+F5) it starts over, so until further navigation the listener runs only once.
The other strange thing is that no matter how many times the function runs, the input parameters are the same for each run.
For example, if I have left the page and come back to it 3 times, and the last time I chose the category with id=1, then the function runs 3 times with id=1. If I then choose id=2, it runs 3 times with id=2.
If anyone has any idea I would be really grateful, as I have been stuck on this for almost a day now.
Thank you very much in advance and please let me know if you need any further info.
The problem you're having is caused by global Liferay.on listeners that are being created but never removed.
In Liferay Portal 7.x, SPA navigation is enabled by default. This means that when you are navigating, the page isn't being completely refreshed, but simply updated with new data coming from the server.
In a traditional navigation scenario, every page refresh resets everything, so you don't have to be so careful about what's left behind. In an SPA scenario, however, global listeners such as Liferay.on or Liferay.after, or body delegates, can become problematic. Every time that code executes, it adds yet another listener to the globally persisted Liferay object. The result is the multiple invocations you're observing.
To fix it, you simply need to listen for the navigation event and detach your listeners, like this:
var onCategory = function(event) {...};

var clearPortletHandlers = function(event) {
    if (event.portletId === '<%= portletDisplay.getRootPortletId() %>') {
        Liferay.detach('category', onCategory); // must match the event name used in Liferay.on
        Liferay.detach('destroyPortlet', clearPortletHandlers);
    }
};

Liferay.on('category', onCategory);
Liferay.on('destroyPortlet', clearPortletHandlers);
I hope someone can help me solve a very serious problem we are facing at the moment: a business-critical application loses data while users work in it.
This happens randomly - I have never reproduced it, but the users are in the system a lot more than I am.
A document is created with a load of fields on it, including 2 rich text fields. We're using Domino 8.5.3, with no Extension Library controls in use. The document has workflow built in, and all validation is done by an SSJS function called from the data source's querySave event. There is an insane amount of logging to sessionScope.log, and this is (now) also captured in a Notes document for each user so I can review what they are doing.
Sometimes a user gets to a workflow step where they have to fill in a rich text field and make a choice in a dropdown field, then submit the document with a workflow button. When the workflow button is pressed (it does a Full Update), some client-side JS runs first:
// Process any autogenerated submit listeners
if (XSP._processListeners) { // not sure if this is valid in all versions of XPages
    XSP._processListeners(XSP.querySubmitListeners, document.forms[0].id);
}
(I added this to try to prevent the RTF fields losing their values after reading a blog post, but so far it's not working.)
Then the server-side event runs and calls view.save() to trigger the QuerySave code (for validation) and the PostSave code, which runs the workflow agent on the server.
95% of the time, this works fine.
5% of the time, however, the page refreshes and all the changes made, both to the RTF field (CKEditor) and to the dropdown field, are reloaded as they were previously, with no content. It's as if the save hasn't happened and the Full Update button has decided to work like a page refresh instead of a submit.
Under normal circumstances, the log shows that when a workflow button is pressed, the QuerySave code starts and returns true. Then the ID of the workflow button pressed is logged (so I can see which ones are being used when I am reviewing problems), then the PostSave code starts and finally returns true.
When there is a problem, the QuerySave event runs, returns true if the validation has passed (or false if it has failed), and then it stops. The ID of the workflow button is also logged. But if QuerySave returns true, the code should continue by calling the PostSave function - it doesn't even log that the PostSave function is starting.
And to make matters worse, after the failure to call the PostSave code, the next thing logged is the beforePageLoad event running, which apparently reloads the page without the recent edits on it, and so the user loses all the information they have typed!
This has to be the most annoying problem I've ever encountered with XPages, as I can find no reason why a successful QuerySave (or even a failure because mandatory fields weren't filled in) would cause the page to refresh like this and lose the content. Please, can someone point me in the right direction?
It sounds as if, in the 5% of cases, the document has been open for more than 30 minutes and the XSP session is timing out - the submit causes the component tree to be re-created, and the now-empty page is returned to the user. Try increasing the timeout for the application to see if the issue goes away.
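If I remember the property names correctly, the timeouts can be raised in the application's xsp.properties (values in minutes - please verify the names against your release's documentation):

xsp.session.timeout=60
xsp.application.timeout=60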
I would design the flow slightly differently. In JSF/XPages, validation belongs in validators, not in a QuerySave event. I'd also rather use a submit for the buttons, so you don't need to trigger view.save() in code, and you don't interfere with JSF's sequence of events - but that's style, not necessarily the source of your problem. Some ideas about that:
Like Jeremy, I would first suspect a timeout; the next stop is a fatal issue in your QuerySave event that derails the runtime (for whatever reason). You can try something like this:
var qsResult = false;
// your code goes here - no return statements please.
// Only if you are happy:
qsResult = true;
return qsResult;
This pessimistic approach will eventually tell you if something is wrong. Also: if there is an abort and your QuerySave just returns, you might run into this trap:

function noReturn() { return; } // nothing comes back!
noReturn() == true;  // --> false
noReturn() == false; // --> false
noReturn() != false; // --> true!!!
What you need to check: what is your page persistence setting - serialize to disk, keep in memory, or keep the latest in memory? It could be that you're running afoul of the way JavaScript libraries work.
An SSJS library is loaded whenever it is needed, and the variables inside are initialized. A library is unloaded when memory conditions require it, and all related variables are discarded. So if, between calls, you rely on a variable inside a function that sits in an SSJS library, you might or might not get the value back, which could describe your error condition. Anything you want to keep should go into a scope (viewScope seems right here).
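For example (a sketch, with made-up names), a library-level variable may silently reset between requests, while a scoped value survives:

// inside an SSJS library
var lastStep = null; // fragile: discarded whenever the library is unloaded

function rememberStep(step) {
    lastStep = step;                  // may or may not still be there on the next call
    viewScope.put("lastStep", step);  // robust: kept with the view
}

function recallStep() {
    return viewScope.get("lastStep");
}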
To make it a little trickier: when you use closures and first-class functions, those functions have access to the variables of the parent function - unless the library has been unloaded. Also, functions don't serialize (an open flaw), so you need to be careful when putting them into a scope (which you otherwise could do).
If your stuff is really complex you might be better off with a backing bean.
Did that help?
To create a managed bean (or more), check Per's article. Your validator would sit in an application-scoped bean:
<faces-config>
  <managed-bean>
    <managed-bean-name>workflowvalidator</managed-bean-name>
    <managed-bean-class>com.company.WfValidator</managed-bean-class>
    <managed-bean-scope>application</managed-bean-scope>
  </managed-bean>
</faces-config>
Inside, you would use a map for the error messages:
public Map<String, String> getErrorMessages() {
    if (this.errorStrings == null) { // errorStrings implements the Map interface
        this.loadErrorDefinitions();  // private method, loads from Domino
    }
    return this.errorStrings;
}
Then you can use EL in the error message string of your validators:
workflowvalidator.errorMessage("some-id");
This allows XPages to pick the right message directly in EL, which is faster than SSJS. You could then go and implement your own custom Java validator that talks to that bean (this would allow you to bypass SSJS here). Unlike that example, I wouldn't put the Notes code in the validator itself, but have it talk to your WfValidator class. To do that, you need to get a handle on the bean in Java:
private WfValidator getValidatorBean() {
    FacesContext fc = FacesContext.getCurrentInstance();
    return (WfValidator) fc.getApplication()
                           .getVariableResolver()
                           .resolveVariable(fc, "workflowvalidator");
}
Using the resolver you get access to the loaded bean. Hope that helps!
My experience is that this problem is due to keeping the page in memory. Sometimes, for some reason, the page gets wiped out of memory. I see this when there are a lot of partial refreshes combined with rather complex backend Java processing; the processing somehow seems to take the memory used by the XPage.
The problem might have been fixed in later releases but I'm seeing it at least in 8.5.2.
In your case, I would figure out some other workaround for the CKEditor bug and use the "Keep pages on disk" option. Or, if you can upgrade to 9.0.1, it might fix both problems.
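If it helps, I believe the "Keep pages on disk" option corresponds to this line in the application's xsp.properties (double-check the exact value names for your release):

xsp.persistence.mode=file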
My extension runs on an existing web page I do not control, and I want to have an options page for it. What I haven't figured out is how to get the option values from the injected code. localStorage isn't shared, of course. I've tried using sendRequest / addListener in both directions, although it would be much preferable to push values from the options page to the injected code than the other way 'round.
At the beginning, I simply put the option checkboxes on the manipulated page (the one the code is injected into), and those checkboxes set values in localStorage:
localStorage.showStuff = !!$(evt.target).attr('checked');
Then I check those values in the code:
if (localStorage.showStuff == 'true') { … }
I moved the checkbox code to the options page and had it do a sendRequest when the options changed, and gave my injected code a listener for the message, but it doesn't get the messages (my background page does, but that doesn't help me). I also tried having the injected code hand a callback to the options page, but the sendResponse object only seems to work for the duration of the notify handler (not surprising, but I had to give it a try).
Right now my manifest's permissions list the foreign page ("http://example.com/*") and "tab".
The one thing I know I can do is asynchronously query the options page via a callback, but the code doesn’t (and really can’t) work asynchronously without serious rewriting.
Any and all ideas welcome, thanks in advance.
I'm new to Chrome extensions, but when I tried to write/read storage from both the background script and the options page, it worked perfectly. (I haven't tried native localStorage, only Chrome's storage API.)
Take a look at this:
Code A (set):

chrome.storage.sync.set({ 'key': 'qwe' });

Code B (get):

chrome.storage.sync.get('key', function(response) {
    console.log(response.key); // 'qwe'
});
You could put either code A in the background and code B in the options page, or the other way around - they are using the same storage.
This works for me; I hope you'll get there too.
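Since you want to push values from the options page to the injected code: chrome.storage also fires a change event that content scripts can listen for. A minimal sketch, reusing the 'key' from above:

// content script: runs whenever another extension page writes to storage
chrome.storage.onChanged.addListener(function(changes, areaName) {
    if (areaName === 'sync' && changes.key) {
        console.log('key changed to', changes.key.newValue);
    }
});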
The thing to remember is that only the background page is long-lived. The rest of the pieces of your Chrome extension are transient (content scripts last for the duration of the site navigation, options pages only while open, etc.).
So you have to use messaging and save things via the background page. However, get ready for the storage API, which should be landing soon - it will make things a lot easier for you!
Check it out here.
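Until it lands, the messaging pattern looks roughly like this (a sketch using the old sendRequest API; the 'getOptions' action name is made up):

// content script: ask the background page, which owns the long-lived localStorage
chrome.extension.sendRequest({ action: 'getOptions' }, function(options) {
    if (options.showStuff) {
        // apply the option to the page
    }
});

// background page: answer requests from content scripts
chrome.extension.onRequest.addListener(function(request, sender, sendResponse) {
    if (request.action === 'getOptions') {
        sendResponse({ showStuff: localStorage.showStuff === 'true' });
    }
});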
I am using jQuery in my web page. I see a lot of memory leakage, and after a while the whole browser grinds to a halt. I used the Sieve tool and noticed a constant increase in the number of DOM elements, each time by 4.
Am I doing something wrong in the way I have associated events?
Or is it because I am using setTimeout to redraw my app every X seconds?
Event association:
$('.bir_Names').click(showNames);
The selector $('.bir_Names') evaluates to a set of some 300 elements, each of which calls the function on click.
setTimeout
Every X minutes I remove every single HTML element in the app, rebind fresh data, and associate the events again. I use jQuery's remove() to delete elements, and have also tried innerHTML = '' and empty().
I see a jump of nearly 30-40 MB for every redraw, and Sieve indicates that none of the deleted nodes are actually removed.
Any help would be greatly appreciated. This thing is driving me nuts.
Thanks.
You don't mention which browser, but some googling seems to indicate that this is a known problem with IE. Here's one potential workaround:
http://forum.jquery.com/topic/possible-memory-leak-in-remove-and-empty
Note that that refers to a 1.2.x release of jQuery. Before you do anything, make sure you are running the latest 1.6.x release, to see whether the defect has already been fixed within jQuery.
[EDIT] Ack... you DO state your browsers - 'all' - so maybe disregard that first link.
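One more thought: since you bind the same click handler to ~300 elements on every redraw, event delegation may reduce the churn. This is a different technique from your per-element binding, but .delegate() (available since jQuery 1.4.2) attaches a single handler to a container that survives the redraws. A sketch, assuming your elements live inside a container such as #app:

// one handler on the container; it also covers .bir_Names elements added later,
// so there is nothing to unbind and rebind on each redraw
$('#app').delegate('.bir_Names', 'click', showNames);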
I have this requirement:
The system will record the length of time the user displayed each page.
While trivial in a rich-client app, I have no idea how people usually go about tracking this.
Edit (by John Hartsock):
I have always been curious about this, and it seems to me that it could be possible with the use of document.onunload events to accurately capture start and stop times for all pages. Basically, as long as a user stays on your site, you will always be able to get the start and stop time for each page except the last one. Here is the scenario:
User enters your site. -- I have a start time for the home page.
User goes to page 2 of your site. -- I have a stop time for the home page and a start time for page 2.
User exits your site. -- How do you get the final stop time for page 2?
The question becomes: is it possible to track when a user closes the window or navigates away from your site? Would it be possible to use the onunload events? If not, what are some other possibilities? Clearly AJAX would be one route, but what are some others?
I don't think you can capture every single page viewing, but I think you might be able to capture enough information to be worthwhile for analysis of website usage.
Create a database table with columns for: web page name, user name, start time, and end time.
On page load, INSERT a record into the table containing data for the first three fields. Return the ID of that record for future use.
On any navigation, UPDATE the record in the navigation event handler, using the ID returned earlier.
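A rough client-side sketch of those steps (the /pageview endpoints and field names are invented for illustration):

var recordId = null;

// on page load: INSERT the record and remember its ID
var req = new XMLHttpRequest();
req.open('POST', '/pageview/start', true);
req.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
req.onload = function() { recordId = req.responseText; };
req.send('page=' + encodeURIComponent(location.href) + '&start=' + Date.now());

// on navigation away: UPDATE the same record with the end time
window.onbeforeunload = function() {
    var upd = new XMLHttpRequest();
    upd.open('POST', '/pageview/end', false); // synchronous, so it finishes before unload
    upd.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    upd.send('id=' + recordId + '&end=' + Date.now());
};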
You will end up with a lot more records with start times than records with both start and end times. But you can do these analyses from this simple data:
You can count the number of visits to each page by counting start times.
You can calculate the length of time the user displayed each page for the records that have both start and end time.
If you have other information about users, such as roles or locations, you can do more analysis of page viewing. For example, if you know roles, you can see which roles use which pages the most.
It is possible that your data will be distorted by the fact that some pages are abandoned more often than others.
However, you certainly can try to capture this data and see how reasonable it appears. Sometimes in the real world we have to make do with less-than-perfect information, but that may be enough.
Edit: Either of these approaches might meet your needs.
1) Here's the HTML portion of an Ajax solution. It's from this page, which has PHP code for recording the information in a text file -- easy enough to change to writing to a database if you wish.
<html>
<head>
<title>Duration Logging Demo</title>
<script type="text/javascript">
var oRequest;
var tstart = new Date();

// ooooo, ajax. ooooooo ...
if (window.XMLHttpRequest)
    oRequest = new XMLHttpRequest();
else if (window.ActiveXObject)
    oRequest = new ActiveXObject("Microsoft.XMLHTTP");

// a generic function to send any data away to the server -
// specifically 'logtimefile.php' in this case
function sendAReq(sendStr) {
    oRequest.open("POST", "logtimefile.php", true); // this is where the stuff is going
    oRequest.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    oRequest.send(sendStr);
}

function calcTime() {
    var tend = new Date();
    var totTime = (tend.getTime() - tstart.getTime()) / 1000;
    var msg = "[URL: " + location.href + "] Time Spent: " + totTime + " seconds";
    sendAReq('tmsg=' + msg);
}
</script>
</head>
<body onbeforeunload="calcTime();">
Hi, navigate away from this page or refresh it to find the time you spent viewing
this page in a log file on the server.
</body>
</html>
2) Another fellow proposes creating a timer in Page_Load and writing the initial database record at that point. Then, on the timer's Elapsed event, update that record, and do a final update in onbeforeunload. That way, if for some reason you miss the very last onbeforeunload event, you will at least have recorded most of the time the user spent on the page (depending on the timer interval). Of course, this solution is relatively resource-intensive if you update every second and have hundreds or thousands of concurrent users, so you could make the feature configurable so it can be turned on and off per application.
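A rough client-side counterpart to that timer idea (a sketch; the endpoint name and the 15-second interval are arbitrary):

// refresh the record periodically, so a missed unload event costs
// at most one interval's worth of accuracy
function updateRecord() {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/pageview/heartbeat', true); // invented endpoint
    xhr.send();
}

setInterval(updateRecord, 15000);
window.onbeforeunload = updateRecord;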
This has to be done with some JavaScript. As the others said, it is not completely reliable, but you should be able to get more than enough accurate data.
You will need to call your server from JavaScript when the page is unloaded; the event to hook is window.onunload. Or you can use a nicer API, like jQuery's. Or you could use a ready-made solution, like WebTrends or Google Analytics - I think both record the length of time a page was displayed.
Good web analytics is pretty hard, and it becomes harder if you have to handle a lot of traffic. You should try to find an existing solution rather than reinvent your own...
I've put some work into a small JavaScript library that times how long a user is on a web page. It has the added benefit of tracking more accurately (though not perfectly) how long a user actually interacts with the page: it ignores time during which the user switches to a different tab, goes idle, minimizes the browser, etc. The Google Analytics method suggested here has the shortcoming (as I understand it) that it only checks when a new request is handled by your domain. It compares the previous request time against the new request time and calls that the "time spent on your web page", but it doesn't actually know whether someone is viewing your page, has minimized the browser, or has switched tabs to 3 different web pages since last loading your page.
As multiple others have mentioned, no solution is perfect. But hopefully this one provides value, too.
Edit: I have updated the example to include the current API usage.
http://timemejs.com
An example of its usage:
Include in your page:
<script src="http://timemejs.com/timeme.min.js"></script>
<script type="text/javascript">
    TimeMe.initialize({
        currentPageName: "home-page",  // page name
        idleTimeoutInSeconds: 15       // time before the user is considered idle
    });
</script>
If you want to report the times yourself to your backend:
var xmlhttp = new XMLHttpRequest();
xmlhttp.open("POST", "ENTER_URL_HERE", true);
xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
var timeSpentOnPage = TimeMe.getTimeOnCurrentPageInSeconds();
xmlhttp.send(timeSpentOnPage);
TimeMe.js also supports sending timing data via WebSockets, so you don't have to try to squeeze a full HTTP request into the document.onbeforeunload event.
In a web-based system, there's no way to do this reliably. Sure, you can record each page a user displays and the length of time between views, but what happens when they close the browser on the last page they're viewing? That's just one of dozens of problems with this requirement.
What about an AJAX-based approach? It would only work when JavaScript is enabled on the client side, but sending a POST to some script every 15 seconds will get you a reasonable amount of granularity.
There are also more complicated "reverse AJAX" techniques you might be able to use, but I don't know much about them.
You can use onunload to do what you need. Have it send an AJAX request to your server to update a database. You may want to return false and then call document.close once the AJAX request has completed, so that the page won't quit prematurely and the AJAX call won't get discarded.
In the database, you'll just want to store the page, the IP address, the time of the event, and whether it was an onload or onunload event.
That is all there is to it.
I recently made an example of recording the time spent on an HTML page. A refresh does not interrupt the recording, but closing the page does.
I use sessionStorage to store the time the page has accumulated:
On refresh, I put the elapsed seconds into sessionStorage.
On close, I cannot get them back from sessionStorage, so the time starts again from 0.
Here is my code:
<body>
time spent: <div id="txt"></div>
</body>
<script>
$(function () {
    statisticsStay();
});

function statisticsStay() {
    var second = 0;
    if (sessionStorage.getItem('testSecond') != null)
        second = sessionStorage.getItem('testSecond');

    var timer = setInterval(function () {
        second++;
        document.getElementById('txt').innerHTML = second;
    }, 1000);

    window.onbeforeunload = function () {
        sessionStorage.setItem('testSecond', second);
    };
}
</script>
Inside my "dom ready" function, I create a TabView on an HTML element and call tabview.getTab(0).blah(). Unfortunately, every now and then I get an error in my JavaScript console (Firefox) saying that tabview.get("tabs") returned null.
YAHOO.util.Event.onDOMReady(function() {
    tabview = new YAHOO.widget.TabView("content");
    var tab0 = tabview.getTab(0);
    ...
tabview.getTab(0) is implemented as tabview.get("tabs")[0].
This happens sometimes, but not every time. Does anybody have an explanation for why? The DOMReady event occurs after the entire DOM is in place but before anything is displayed, right?
Speaking of which, sometimes I see a flash of data in some of the other tabs, which does not bode well for the nice, clean experience I was hoping for.
This is YUI 2.7.0.
OK - I believe the answer is that I was trying to use Prototype and YUI at the same time. In theory that is possible, but you need to pick one or the other when it comes to doing things on the "dom:loaded"/onDOMReady events, if you know what I mean.
So I don't know exactly what was happening, but it was some sort of race, and once I picked a single mechanism for running code when the DOM was ready, everything worked fine.