XPages error when accessing a sessionScope var - xpages

I have a strange and intermittent error with my XPages app.
I am extensively using sessionScope variables for various housekeeping items in my app, like navigation and layout titles.
From time to time my app blows up because one of my sessionScope values is null - but it should not be null.
The code blows up on an include control in my main XPage.
What I am trying to accomplish is to dynamically load a page into facet_1 of an appLayout:
<xp:include id="include1" xp:key="LeftColum">
    <xp:this.pageName><![CDATA[${javascript:
        var selPage:String = sessionScope.selectedPage;
        var pageType:String = sessionScope.pageType[selPage];
        if (pageType == "0") {
            navKey = "ccAppNav";
        }
        if (pageType == "9") {
            var capSelPage = selPage.charAt(0).toUpperCase() + selPage.substring(1);
            navKey = "ccAppNav" + capSelPage;
        }
        navKey + ".xsp"}]]></xp:this.pageName>
</xp:include>
It works perfectly for a while, but then randomly blows up because the value of sessionScope.selectedPage is null. I am almost certain I am not setting the value to null anywhere in the code.
Oddly, I am also seeing this error near the null error:
"Xpage State data not available for /xpApp because no control tree was found in the cache."
That sounds to me like XPages has flushed the sessionScope values.

Ensure that the Session Timeout and Application Timeout in xsp.properties are greater than the HTTP session timeout. Otherwise the user will not be prompted to log in again, but the sessionScope and applicationScope values could get removed.
The same can happen if someone else refreshes the design of the application or cleans the application.
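For reference, the scope timeouts live in the application's xsp.properties and are expressed in minutes; the values below are only illustrative and the property names should be double-checked against your Domino version, while the HTTP session timeout they should exceed is configured on the server (or Internet Site) document:
xsp.session.timeout=30
xsp.application.timeout=30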

Related

Random failure of selenium test on test server

I'm working on a project which uses Node.js and Nightwatch for test automation. The problem is that the tests are not reliable and give lots of false positives. I did everything to make them stable and am still getting the errors. I went through some blogs like https://bocoup.com/blog/a-day-at-the-races and did some code refactoring. Does anyone have suggestions to solve this issue? At this moment I have two options: either rewrite the code in Java (removing Node.js and Nightwatch from the solution, as I'm far more comfortable in Java than JavaScript and mostly struggle with the non-blocking nature of JavaScript), or take snapshots, review app logs, and run one test at a time.
Test environment:
Server - Linux
Display - Framebuffer
Total VMs - 9, with Selenium nodes running the tests in parallel.
Browser - Chrome
The type of error I get is "element not found". Most of the time the tests fail as soon as the page is loaded. I have already set an 80-second timeout, so time can't be the issue. The tests run in parallel but on separate VMs, so I don't know whether that could be an issue or not.
Edit 1:
I was working on this to find the root cause. I did the following things to eliminate the random failures:
a. Added --suiteRetries to retry the failed cases.
b. Went through the error screenshots and DOM source; everything seems fine.
c. Replaced browser.pause with explicit waits.
Also, while debugging I observed one problem; maybe that is the issue causing the random failures. Here's the code snippet:
for (var i = 0; i < apiResponse.data.length; i++) {
    var name = apiResponse.data[i];
    browser.useXpath().waitForElementVisible(pageObject.getDynamicElement("#topicTextLabel", name.trim()), 5000, false);
    browser.useCss().assert.containsText(
        pageObject.getDynamicElement("#topicText", i + 1),
        name.trim(),
        util.format(issueCats.WRONG_DATA)
    );
}
I added the XPath check to validate whether I'm waiting long enough for that text to appear. I observed that the visibility assertion passes, but in the next assertion the #topicText comes back as the previous value or null. This is an intermittent issue, but it happens frequently on the test server.
There is no magic bullet for brittle end-to-end UI tests. In an ideal world there would be an option, avoid_random_failures=true, that would quickly and easily solve the problem, but for now it's only a dream.
Simply rewriting all the tests in Java will not solve the problem, but if you feel more at home in Java, then I would definitely go in that direction.
As you already know from the article Avoiding random failures in Selenium UI tests, there are three commonly used techniques for avoiding race conditions in UI tests:
using constant sleep
using WebDriver's "implicit wait" parameter
using explicit waits (WebDriverWait + ExpectedConditions + FluentWait)
These techniques are also briefly mentioned in WebDriver: Advanced Usage; you can also read about them here: Tips to Avoid Brittle UI Tests.
Methods 1 and 2 are generally not recommended; they have drawbacks. They can work well on simple HTML pages, but they are not 100% reliable on AJAX pages, and they slow down the tests. The best one is #3 - explicit waits.
In order to use technique #3 (explicit waits) you need to familiarize yourself and be comfortable with the following WebDriver tools (I point to their Java versions, but they have counterparts in other languages):
WebDriverWait class
ExpectedConditions class
FluentWait - used very rarely, but very useful in some difficult cases
ExpectedConditions has many predefined wait states; the most used (in my experience) is ExpectedConditions#elementToBeClickable, which waits until an element is visible and enabled such that you can click it.
How to use it - an example: say you open a page with a form which contains several fields into which you want to enter data. Usually it is enough to wait until the first field appears on the page and is editable (clickable):
By field1 = By.xpath("//div//input[.......]");
By field2 = By.id("some_id");
By field3 = By.name("some_name");
By buttonOk = By.xpath("//input[ text() = 'OK' ]");
....
....
WebDriverWait wait = new WebDriverWait(driver, 60); // wait max 60 seconds
// wait max 60 seconds until the element is visible and enabled such that you can click it
// if you can click it, that means it is editable
wait.until(ExpectedConditions.elementToBeClickable(field1)).sendKeys("some data");
driver.findElement(field2).sendKeys("other data");
driver.findElement(field3).sendKeys("name");
....
wait.until(ExpectedConditions.elementToBeClickable(buttonOk)).click();
The above code waits until field1 becomes editable after the page is loaded and rendered - but no longer than necessary. If the element is not visible and editable after 60 seconds, the test fails with a TimeoutException.
Usually it's only necessary to wait for the first field on the page; if it becomes active, then the others will be too.
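FluentWait is only mentioned above, so here is a brief sketch of how it can be used, assuming a reasonably recent Selenium version with the Duration-based API (the locator, timeout and polling interval are purely illustrative):
// FluentWait and Wait come from org.openqa.selenium.support.ui, Duration from java.time
Wait<WebDriver> fluentWait = new FluentWait<>(driver)
        .withTimeout(Duration.ofSeconds(30))      // give up after 30 seconds
        .pollingEvery(Duration.ofMillis(500))     // re-check the condition twice per second
        .ignoring(NoSuchElementException.class);  // keep polling while the element is missing

WebElement firstField = fluentWait.until(d -> d.findElement(By.id("some_id")));
firstField.sendKeys("some data");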

Does Capybara "within" deal with ajax? Why did within fail with the error "unable to find css"?

I have a step like this:
Then(/^I can see the Eligible Bugs list "([^"]*)"$/) do |bugs|
  bugs_list = bugs.split(", ")
  assert page.has_css?(".eligible-bugs", :visible => true)
  within(".eligible-bugs") do
    bugs_list.each do |bug|
      assert page.has_content?(bug)
    end
  end
end
But the step sometimes fails at the within(".eligible-bugs") do line with the error 'Unable to find css ".eligible-bugs"'.
I find that odd: the assertion has already passed, which means the element is visible.
Why can't within find the css? How does this happen?
BTW, I have set my max wait time to 5.
Capybara.default_max_wait_time = 5
The only way that should happen is if the page is dynamically changing while you're running the test: sometime during your check for all the bugs' content, the page changes and the '.eligible-bugs' element goes away. The test and the browser run separately, so how/why it is happening depends on what else your page is doing in the browser; it would also depend on what steps have come before this one in the test.
Also, note that it's not necessarily disappearing between the has_css? check and the within statement first running. If it disappears at any point while the code inside the within block is running, it could throw the same error as it attempts to reload the '.eligible-bugs' element.
From the title of the question I assume the list you want to check is the result of a search or filtering action. If it is, does that action remove the existing '.eligible-bugs' element and then, after some time, replace it with a new one returned from an ajax request or something? If that is the case then, since you control the test data, you should wait for the correct number of results to show, thereby ensuring any element replacements have completed, before checking for the text. How you do that would depend on the exact makeup of the page, but if each eligible bug were a child of '.eligible-bugs' with a class of '.eligible-bug', I would write your test something like this:
Then(/^I can see the Eligible Bugs list "([^"]*)"$/) do |bugs|
  bugs_list = bugs.split(", ")
  assert_css(".eligible-bugs > .eligible-bug", count: bugs_list.size) # wait until the expected number of results have shown up
  within(".eligible-bugs") do
    bugs_list.each do |bug|
      assert_content(bug)
    end
  end
end

Reducing NUMBER_OF_VIEWS_IN_SESSION results in ViewExpiredException

I'm running MyFaces 2.1.7 and desperately trying to reduce the memory usage of our web application. Basically, the session size of each user balloons to up to 10MB, which we can't handle.
I tried adding the following to the deployment descriptor:
<context-param>
    <param-name>org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION</param-name>
    <param-value>3</param-value>
</context-param>
The result was OK and the session size of any given user wasn't going higher than 1MB. But since the change a lot of users aren't able to log in. What is happening is that a ViewExpiredException is thrown on the login screen, and I wrote a custom ViewExpiredException handler class that redirects the user back to the login screen, so essentially they're stuck in a login screen loop:
try to log in ---> ViewExpiredException thrown ---> Custom ViewExpiredHandler ----> Forward user to Login Screen
<----------------------------------------------------------------------------------------------------------------
Removing the above context-param fixes the issue! My questions are:
1) Why is the ViewExpiredException thrown when NUMBER_OF_VIEWS_IN_SESSION is reduced from its default value?
2) Is it possible to work around the issue using the custom ViewExpiredHandler class? How?
3) Am I doing something wrong that is causing this issue?
P.S. In testing why this is happening: if I open, for example, IE and try to log in, all is OK, but if I then open another browser (e.g. Chrome) and log in with another username, the above issues are encountered, mirroring the issues reported by users who were not able to log in.
Also, following hints from the MyFaces wiki, I have added the following to the descriptor, but I don't believe it contributes to the issue:
<context-param>
    <param-name>org.apache.myfaces.SERIALIZE_STATE_IN_SESSION</param-name>
    <param-value>false</param-value>
</context-param>
Update:
As suggested by BalusC, I changed STATE_SAVING_METHOD to client in order to resolve the memory issues we are having. Immediately, all UAT testers reported a dramatic slowdown in all page load speeds, so I had to revert to having STATE_SAVING_METHOD set to server.
I've been experimenting with the value of org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION, and with the current value of 6 (a 70% improvement over the default of 20 views) I am not getting the ViewExpiredException errors any more (the reason this question was originally created).
But in a way I am playing Russian roulette. I don't really know why setting org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION to 3/4/5 didn't work and why 6 is working, and it bothers me that I can't explain it. Is anyone able to provide information?
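For reference, the state saving switch mentioned in the update is the standard JSF context parameter; in web.xml it looks like this (shown with the client value that was tried and then reverted):
<context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>client</param-value>
</context-param>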
I originally asked this question after noticing how use of the browser's BACK button was causing a ViewExpiredException to be thrown.
I had also reduced the default value of 20 for org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION to 3 and then 6 (trying to reduce the memory footprint), which aggravated the problem.
Having written a ViewExpiredException handler, I was now looking for a way to do something about the exception when it was thrown, and below is how I am dealing with it:
1 - `org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION` value is 6.
2 - When the ViewExpiredException is thrown, retrieve the request URL in the ViewExpired handler:
HttpServletRequest orgRequest = (HttpServletRequest) fc.getExternalContext().getRequest();
3 - Insert a boolean into the user session (used as a flag below):
Map<String, Object> sessionMap = FacesContext.getCurrentInstance()
        .getExternalContext().getSessionMap();
sessionMap.put("refreshedMaxSesion", true);
4 - Redirect back to the URL retrieved in step 2:
fc.getExternalContext().redirect(orgRequest.getRequestURI());
5 - Since the page is reloaded, check whether the flag from step 3 exists; if so, flag a notification to be shown to the user and remove the flag from the session:
if (sessionMap.containsKey("refreshedMaxSesion")) {
    showPageForceRefreshed = true;
    sessionMap.remove("refreshedMaxSesion");
}
6 - I'm using Pines Notify to show a subtle "FYI, this page was refreshed" type of message:
<h:outputText rendered="#{ myBean.showPageForceRefreshed }" id="PageWasRefreshedInfoId">
    <script type="text/javascript">
        newjq(document).ready(function() {
            newjq.pnotify({
                pnotify_title: 'Info',
                pnotify_text: 'Ehem, Page was Refreshed.',
                pnotify_mouse_reset: false,
                pnotify_type: 'info',
                pnotify_delay: 5000
            });
        });
    </script>
</h:outputText>
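For completeness, a custom ViewExpiredException handler such as the one described above is normally wired in through an ExceptionHandlerFactory registered in faces-config.xml; the class name below is hypothetical:
<factory>
    <exception-handler-factory>com.example.ViewExpiredExceptionHandlerFactory</exception-handler-factory>
</factory>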

Request.Filter in an IIS Managed Module

My goal is to create an IIS Managed Module that looks at the Request and filters out content from the POST (XSS attacks, SQL injection, etc).
I'm hung up right now, however, on the process of actually filtering the Request. Here's what I've got so far:
In the module's Init, I wire HttpApplication.BeginRequest up to a local event handler. In that event handler, I have the following lines set up:
if (application.Context.Request.HttpMethod == "POST")
{
    application.Context.Request.Filter = new HttpRequestFilter(application.Context.Request.Filter);
}
I also set up an HttpResponseFilter on application.Context.Response.Filter.
HttpRequestFilter and HttpResponseFilter are implementations of Stream.
In the response filter, I have the following set up (an override of Stream.Write):
public override void Write(byte[] buffer, int offset, int count)
{
    var Content = UTF8Encoding.UTF8.GetString(buffer);
    Content = ResponseFilter.Filter(Content);
    _responseStream.Write(UTF8Encoding.UTF8.GetBytes(Content), offset, UTF8Encoding.UTF8.GetByteCount(Content));
}
ResponseFilter.Filter is a simple String.Replace, and it does, in fact, replace text correctly.
In the request filter, however, there are 2 issues.
The code I have currently in the RequestFilter (an override of Stream.Read):
public override int Read(byte[] buffer, int offset, int count)
{
    var Content = UTF8Encoding.UTF8.GetString(buffer);
    Content = RequestFilter.Filter(Content);
    if (buffer[0] != 0)
    {
        return _requestStream.Read(UTF8Encoding.UTF8.GetBytes(Content), offset, UTF8Encoding.UTF8.GetByteCount(Content));
    }
    return _requestStream.Read(buffer, offset, count);
}
There are two issues with this. First, the filter is called twice, not once, and one of the requests is basically just a stream of \0's (the if check on buffer[0] filters this out currently, but I think I'm setting something up wrong).
Second, even though I am correctly grabbing the content with .GetString in the Read, and then altering it in RequestFilter.Filter (a glorified string.Replace()), when I return the byte-encoded Content inside the if statement, the input is unmodified.
Here's what I'm trying to figure out:
1) Is there something I can check prior to the filter to ensure that what I'm checking is only the POST and not the other time it is being called? Am I not setting the Application.Context.Request.Filter up correctly?
2) I'm really confused as to why rewriting things to the _requestStream (the HttpApplication.Context.Request.Filter that I sent to the class) isn't showing up. Any input as to something I'm doing wrong would be really appreciated.
Also, is there any difference between HttpApplication.Request and HttpApplication.Context.Request?
Edit: for more information, I'm testing this on a simple .aspx page that has a text box, a button, and a label, and on button click assigns the text box's text to the label's text. Ideally, if I put content in the text box that should be filtered, my understanding is that by intercepting and rewriting the POST I can cause the content to hit the server as modified. I've run tests, though, with breakpoints in the module and in the page code, and the module completes before the code-behind on the .aspx page is hit. The .aspx page gets the values as passed from the form and ignores any filtering I attempted to do.
There are a few issues going on here, but for future reference, what explains the page receiving the unfiltered POST, as well as the filter being evaluated twice, is that you are likely accessing the request object in some way PRIOR to setting the Request.Filter. This may cause it to evaluate the input stream, running the currently set filter chain as-is and returning that stream.
For example, simply accessing Request.Form["something"] would cause it to evaluate the input stream, running the entire filter chain, at that point in time. Any modification to Request.Filter after that point would have no effect, and it would appear that your filter is being ignored.
What you want to do is possible, but ASP.NET also provides Request Validation to address some of these issues (XSS). SQL injection, however, is usually averted by never constructing queries through string concatenation, not via input sanitizing, though defense in depth is usually a good idea.
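As a minimal sketch of the ordering described above (HttpRequestFilter is the questioner's Stream wrapper; the essential point is simply that nothing reads Request.Form, Request.Params or Request.InputStream before the filter is installed):
using System.Web;

public class PostFilterModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += (sender, e) =>
        {
            var app = (HttpApplication)sender;

            // Install the filter before anything touches the request body;
            // reading Request.Form here first would evaluate the input stream
            // with the filter chain as it currently stands and bypass this filter.
            if (app.Context.Request.HttpMethod == "POST")
            {
                app.Context.Request.Filter =
                    new HttpRequestFilter(app.Context.Request.Filter);
            }
        };
    }

    public void Dispose() { }
}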

ZingChart memory leak causing browser to crash

We are currently rendering 50-100 canvases in the browser window. Both IE and Chrome crash.
On further investigation, it looks like memory is creeping up steadily, causing the browser to crash.
We are building a solution to print charts. To achieve this:
We display all the charts in a simple page (iframe); the charts are not visible to the user.
We use the chart id to get the image data.
Since the charts are not visible, we can 'destroy' or remove them from memory once they are rendered.
But 'destroy' does not reduce the charts' memory footprint.
We tried setting the object to null; this did not work either.
Attached is a snippet for your reference:
var runner = 0;
zingchart.complete = function (dataObj) {
    for (i = 0; i < ZingChartCollection.length; i++) {
        if (dataObj["id"] == ZingChartCollection[i].ChartId) {
            var data = zingchart.exec(dataObj["id"], "getimagedata", '{"filetype": "png"}');
            zingchart.exec(dataObj["id"], 'destroy');
            zingchart.exec();
            if (runner < 200) {
                document.getElementById("displayCount").value = runner;
                render();
            }
            else {
                //zingchart = null;
            }
            runner++;
        }
    }
}
Any suggestions would be great.
Here's a note regarding the issue from the ZingChart dev team:
The issue here is that the render() -> complete event -> image generation -> destroy() sequence is a closed loop.
The way the ZingChart library works, in order to fire the complete event as soon as possible, the code was binding the context menu events AFTER the complete event was fired.
So, since destroy was being called immediately, the context menu events were left out in the open, and with 50-100 charts this starts to add up, leading to memory leaks.
This will be changed and fixed in an upcoming version so that the complete event fires after the context menu setup; however, with the current setup, there are two options:
Use mode:static on render(), since the idea is to get a static image out of the chart. This skips event binding, so the memory leak will no longer be an issue. Also, since fewer canvas objects will be used, this dramatically decreases the memory needed per chart.
If you need the complete functionality of the charts (although it is not needed in this particular case), call destroy() in a delayed function via setTimeout. This allows the context menu events to be set, so destroy() will also unbind them.
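A rough sketch of both options, reusing the calls from the snippet in the question (the render configuration object, container id and zero-millisecond delay are assumptions, not values confirmed by ZingChart):
// Option 1: render a static chart so no context menu events are bound at all
zingchart.render({
    id: 'chartDiv1',     // hypothetical container id
    data: chartConfig,   // hypothetical chart configuration
    mode: 'static'
});

// Option 2: keep the normal render, but defer destroy() so the context menu
// events are bound first and destroy() can unbind them again
zingchart.complete = function (dataObj) {
    var data = zingchart.exec(dataObj["id"], "getimagedata", '{"filetype": "png"}');
    setTimeout(function () {
        zingchart.exec(dataObj["id"], 'destroy');
    }, 0);
};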
