I'm running MyFaces 2.1.7 and am desperately trying to reduce the memory usage of our web application. The session size of each user balloons to up to 10 MB, which we can't handle.
I tried adding the following to the deployment descriptor:
<context-param>
<param-name>org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION</param-name>
<param-value>3</param-value>
</context-param>
The result was good: the session size of any given user no longer went above 1 MB. But since the change, a lot of users can't log in. What happens is that a ViewExpiredException is thrown on the login screen, and my custom ViewExpiredException handler redirects the user back to the login screen, so essentially they're stuck in a login-screen loop:
try to log in ---> ViewExpiredException thrown ---> Custom ViewExpiredHandler ----> Forward user to Login Screen
<----------------------------------------------------------------------------------------------------------------
Removing the above context-param fixes the issue! My questions:
1) Why is the ViewExpiredException thrown when NUMBER_OF_VIEWS_IN_SESSION is reduced from its default value?
2) Is it possible to work around the issue in the custom ViewExpiredHandler class? How?
3) Am I doing something wrong that is causing this issue?
P.S. While testing why this happens: if I open, for example, IE and log in, all is OK; but if I then open another browser (e.g. Chrome) and log in with another username, I run into the issues above, mirroring what the users who can't log in are reporting.
Also, following hints from the MyFaces wiki, I added the following to the descriptor, but I don't believe it contributes to the issue:
<context-param>
<param-name>org.apache.myfaces.SERIALIZE_STATE_IN_SESSION</param-name>
<param-value>false</param-value>
</context-param>
Update:
As suggested by BalusC, I changed STATE_SAVING_METHOD to client to resolve the memory issues we are having. Immediately, all UAT testers reported a dramatic slowdown in page load times, so I had to revert STATE_SAVING_METHOD to server.
I've since been experimenting with the value of org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION, and with the current value of 6 (a 70% improvement over the default of 20 views) I am not getting the ViewExpiredException errors any more (the reason this question was originally created).
But in a way I am playing Russian roulette. I don't really know why setting org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION to 3, 4, or 5 didn't work while 6 does, and it bothers me not being able to explain it. Is anyone able to provide information?
I originally asked this question after noticing how use of the browser's Back button was causing a ViewExpiredException to be thrown.
I had also reduced the default value of 20 for org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION to 3 and then 6 (trying to reduce the memory footprint), which aggravated the problem.
Having written a ViewExpiredException handler, I was looking for a way to do something about the exception when it was thrown; below is how I am dealing with it:
1 - `org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION` value is 6
2 - when the ViewExpiredException is thrown, retrieve the request URL in the ViewExpired handler:
HttpServletRequest orgRequest = (HttpServletRequest) fc.getExternalContext().getRequest();
3 - insert a boolean into the user session (used as a flag in step 5):
Map<String, Object> sessionMap = FacesContext.getCurrentInstance()
.getExternalContext().getSessionMap();
sessionMap.put("refreshedMaxSesion",true);
4 - redirect back to the URL retrieved in step 2:
fc.getExternalContext().redirect(orgRequest.getRequestURI());
5 - since the page is reloaded, check whether the flag from step 3 exists; if so, flag a notification to be shown to the user and remove the flag from the session:
if(sessionMap.containsKey("refreshedMaxSesion")){
showPageForceRefreshed = true;
sessionMap.remove("refreshedMaxSesion");
}
6 - I'm using Pines Notify to show a subtle "FYI, this page was refreshed" type of message:
<h:outputText rendered="#{ myBean.showPageForceRefreshed }" id="PageWasRefreshedInfoId" >
<script type="text/javascript">
newjq(document).ready(function(){
newjq.pnotify({
pnotify_title: 'Info',
pnotify_text: 'Ehem, Page was Refreshed.' ,
pnotify_mouse_reset: false,
pnotify_type: 'info',
pnotify_delay: 5000
});
});
</script>
</h:outputText>
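Steps 3 and 5 above boil down to a set-then-consume flag. As a plain-Java sketch (a HashMap stands in for the JSF session map; the class and method names here are mine, not part of the original code):

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java simulation of the session-flag pattern from steps 3 and 5.
// A HashMap stands in for the JSF session map.
class RefreshFlagDemo {
    static final String FLAG = "refreshedMaxSesion";

    // Step 3: the exception handler sets the flag before redirecting.
    static void markRefreshed(Map<String, Object> sessionMap) {
        sessionMap.put(FLAG, true);
    }

    // Step 5: on reload, consume the flag exactly once so the
    // notification is shown only on the first page load after redirect.
    static boolean consumeRefreshedFlag(Map<String, Object> sessionMap) {
        if (sessionMap.containsKey(FLAG)) {
            sessionMap.remove(FLAG);
            return true;
        }
        return false;
    }
}
```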
Related
I am using PrimeFaces to provide a file download action in JSF. The code I am using is:
<p:commandButton id="xlsExport" value="Export XLS"
ajax="false"
onclick="PrimeFaces.monitorDownload(startPleaseWaitMonitor, stopPleaseWaitMonitor);">
<p:fileDownload value="#{SampleBean.XLSExport}" />
</p:commandButton>
SampleBean has the following method:
public StreamedContent getXLSExport() {
...
byte[] content = generator.generateXLS();
return new DefaultStreamedContent(new ByteArrayInputStream(content), "application/vnd.ms-excel", fileName, "UTF-8");
}
I am using it on two application servers: JBoss and WebSphere. On WebSphere, I see this warning in the server logs when I do an export:
000000f5 SRTServletRes W
com.ibm.ws.webcontainer.srt.SRTServletResponse setStatus WARNING:
Cannot set status. Response already committed.
When I run a similar method for CSV export, there is no warning. On JBoss there is no warning either.
What could be the reason for this log warning?
I've recreated this locally - it appears that PrimeFaces' FileDownloadActionListener is attempting to set the response status code after that response has already been committed by the server. The FileDownload code grabs the response output stream, writes the entire contents of the downloaded file to it, and then attempts to update the response status code.
WebSphere commits and flushes a response when the amount of data passed into the response buffer exceeds a certain threshold (by default, 32 KB). Once the response has been committed, its headers (e.g. status code) can't be updated. The other application servers probably behave the same way here - they just might not log a warning message. In this particular case the warning isn't anything to worry about, since the FileDownload code was just attempting to update the status code from 200 -> 200.
Using different content types (like CSV) shouldn't make a difference here. File size does make a difference - if a file is downloaded that's less than the response buffer size, then the response won't be committed before the PrimeFaces code tries to set its status.
A simple fix for this warning message would be to check to see if the response is committed before attempting to change its status. I've opened a PrimeFaces issue for this: https://github.com/primefaces/primefaces/issues/3955
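The guard described above is small; here is a hedged sketch of the idea (not the actual PrimeFaces source), using a minimal stand-in interface for the two servlet-response methods involved:

```java
// Sketch of the fix described above (illustration only, not PrimeFaces code).
class StatusGuardDemo {
    // Tiny stand-in for javax.servlet.http.HttpServletResponse, modeling
    // only the two methods relevant here.
    interface Response {
        boolean isCommitted();
        void setStatus(int status);
    }

    // Only touch the status when the container has not yet committed
    // the response; otherwise the update is a no-op instead of a warning.
    static void setStatusSafely(Response response, int status) {
        if (!response.isCommitted()) {
            response.setStatus(status);
        }
    }
}
```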
Update: I provided a fix to PrimeFaces, so you shouldn't see this anymore in the nightly builds/next version.
I have a strange, intermittent error in my XPages app.
I am extensively using sessionScope variables for various housekeeping items in my app, like navigation and layout titles.
From time to time my app blows up because one of my sessionScope values is null - but it should not be null.
The code blows up on an include in my main XPage. What I am trying to accomplish is to dynamically load a page into facet_1 of an appLayout:
<xp:include id="include1" xp:key="LeftColum">
<xp:this.pageName><![CDATA[${javascript:var selPage:String = sessionScope.selectedPage;
var pageType:String = sessionScope.pageType[selPage];
if (pageType == "0")
{navKey = "ccAppNav"}
if (pageType == "9")
{var capSelPage = selPage.charAt(0).toUpperCase() + selPage.substring(1);
navKey = "ccAppNav" + capSelPage}
navKey + ".xsp"}]]></xp:this.pageName>
</xp:include>
It works perfectly for a while, but then randomly blows up because sessionScope.selectedPage is null. I am almost certain I am not setting the value to null anywhere in the code.
Oddly, I am also seeing this error near the null error:
"Xpage State data not available for /xpApp because no control tree was found in the cache."
That sounds to me like XPages has flushed the sessionScope values.
Ensure that the Session Timeout and Application Timeout in xsp.properties are greater than the HTTP timeout. Otherwise the user will not be prompted to log in again, but the sessionScope and applicationScope variables could get removed.
The same can also happen if someone refreshes the design of the application or cleans the application.
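For reference, a minimal xsp.properties sketch of those two settings (the values here are illustrative; both are in minutes and should exceed the HTTP session timeout):

```properties
# Illustrative values - keep both greater than the HTTP session timeout
xsp.session.timeout=30
xsp.application.timeout=30
```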
I am trying to implement Paul Calhoun's Apache FOP solution for creating PDFs from XPages (from Notes In 9 #102). I am getting the following Java exception when trying to run the xAgent that does the processing: "Can't get a Writer while an OutputStream is already in use".
The only change I made to Paul's code was the package name. I have isolated the exception to the SSJS line: var jce:DominoXMLFO2PDF = new DominoXMLFO2PDF(); All that line does is instantiate the class; there is no custom constructor. I don't believe it is the code itself, but some configuration issue. The SSJS code is in the beforeRenderResponse event where it should be; I haven't changed anything in the xAgent.
I have copied the jar files from Paul's sample database to mine and verified that the build paths are the same between the two databases. Everything compiles fine (after I did all this). This exception appears to be an XPages-only exception.
Here's what's really going on with this error:
XPages are essentially servlets... everything that happens in an XPage is just a layer on top of a servlet engine. There are basically two types of data that a servlet can send back to whatever initiated the connection (e.g. a browser): text and binary.
An ordinary XPage sends text - specifically, HTML. Some xAgents also send text, such as JSON or XML. In all of these scenarios, however, Domino uses a Java Writer to send the response content, because Writers are optimized for sending character data.
When we need to send binary content, we use an OutputStream instead, because streams are optimized for sending generic byte data. So if we're sending PDF, DOC/XLS/PPT, images, etc., we need to use a stream, because we're sending binary data, not text.
The catch (as you'll soon see, that's a pun) is that we can only use one per response.
Once any HTTP client is told what the content type of a response is, it makes assumptions about how to process that content. So if you tell it to expect application/pdf, it's expecting to only receive binary data. Conversely, if you tell it to expect application/json, it's expecting to only receive character data. If the response includes any data that doesn't match the promised content type, that nearly always invalidates the entire response.
So Domino in its infinite wisdom protects us from making this mistake by only allowing us to send one or the other in a single request, and throws an exception if we disobey that rule.
Unfortunately... if there's any exception in our code when we're trying to send binary content, Domino wants to report that to the consumer... which tries to invoke the output writer to send HTML reporting that something went wrong. Except we already got a handle on the output stream, so Domino isn't allowed to get a handle on the output writer, because that would violate its own rule against only using one per response. This, in turn, throws the exception you reported, masking the exception that actually caused the problem (in your case, probably a ClassNotFoundException).
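The one-per-response rule can be modeled in a few lines. This stand-in class is not Domino's actual implementation, just an illustration of why the second accessor throws:

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.io.StringWriter;

// Minimal model of the servlet rule behind this exception: a response
// hands out either a character Writer or a byte OutputStream, and
// whichever accessor is called second throws IllegalStateException.
class OneOutputDemo {
    private PrintWriter writer;
    private OutputStream stream;

    OutputStream getOutputStream() {
        if (writer != null) {
            throw new IllegalStateException("getWriter() has already been called");
        }
        if (stream == null) {
            stream = new ByteArrayOutputStream();
        }
        return stream;
    }

    PrintWriter getWriter() {
        if (stream != null) {
            throw new IllegalStateException(
                "Can't get a Writer while an OutputStream is already in use");
        }
        if (writer == null) {
            writer = new PrintWriter(new StringWriter());
        }
        return writer;
    }
}
```

This is why the error-reporting path fails: the xAgent already took the stream for the PDF, so Domino's attempt to grab the writer for an HTML error page is the call that blows up.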
So how do we make sure that we see the real problem, and not this misdirection? We try:
try {
/*
* Move all your existing code here...
*/
} catch (e) {
print("Error generating dynamic PDF: " + e.toString());
} finally {
facesContext.responseComplete();
}
There are two reasons this is a preferred approach:
If something goes wrong with our code, we don't let Domino throw an exception about it. Instead, we log it (instead of using print to send it to the console and log, you could also toss it to OpenLog, or whatever your preferred logging mechanism happens to be). This means that Domino doesn't try to report the error to the user, because we've promised that we already reported it to ourselves.
By moving the crucial facesContext.responseComplete() call (which is what ultimately tells Domino not to send any content of its own) to the finally block, this ensures it will get executed. If we left it inside the try block, it would get skipped if an exception occurs, because we'd skip straight to the catch... so even though Domino isn't reporting our exception because we caught it, it still tries to invoke the response writer because we didn't tell it not to.
If you follow the above pattern, and something's wrong with your code, then the browser will receive an incomplete or corrupt file, but the log will tell you what went wrong, rather than reporting an error that has nothing to do with the root cause of the problem.
I almost deleted this question, but decided to answer it myself since there is very little on Google when you search for this exception.
The issue was in the xAgent: an importPackage line was incorrect. Fixing it made everything work. The exception verbiage, "Can't get a Writer while an OutputStream is already in use", is quite misleading. I don't know what else triggers this exception, but an alternative description would be "Java class ??yourClass?? not found".
If you found this question, then you likely have the same issue. I would ignore what the exception actually says and check your package statements throughout your application. The Java code will error on its own, but SSJS that references the Java will not error until runtime; focus on that code.
Updating the response headers after writing the body can solve this kind of problem. Example:
HttpServletResponse response = (HttpServletResponse) facesContext.getExternalContext().getResponse();
response.getWriter().write("<html><body>...</body></html>");
response.setContentType("text/html");
response.setHeader("Cache-Control", "no-cache");
response.setCharacterEncoding("UTF-8");
So here are 2 requests:
http://example.com/someUrl/
http://example.com/someUrl/index.xhtml (xhtml extension is not relevant just an example)
When <welcome-file>index.xhtml</welcome-file> has been set, request 1 is handled by the server as request 2.
However, in both cases request.getRequestURI() returns the complete URI: /someUrl/index.xhtml.
According to the documentation it shouldn't, but in most cases it's what we want, so it seems fine that it does.
I'm working with JSF on JBoss WildFly (the Undertow web server), and I don't know which one is responsible.
I don't necessarily want to change how it works, but I'm looking for a way to get the original URI as the end user sees it in the browser address bar, i.e. without the index.xhtml part in case 1.
To be more precise, I have to get the exact same URL as returned by document.location.href in JavaScript.
The welcome file is displayed via a forward, performed under the server's covers by RequestDispatcher#forward(). In that case, the original request URI is available as a request attribute under the key identified by RequestDispatcher#FORWARD_REQUEST_URI, which is javax.servlet.forward.request_uri.
So, this should do:
String originalURI = (String) request.getAttribute(RequestDispatcher.FORWARD_REQUEST_URI);
if (originalURI == null) {
originalURI = request.getRequestURI();
}
// ...
I have very little to go on here. I can't reproduce this locally, but when users get the error I get an automatic email exception notification:
Invalid length for a Base-64 char array.
at System.Convert.FromBase64String(String s)
at System.Web.UI.ObjectStateFormatter.Deserialize(String inputString)
at System.Web.UI.ObjectStateFormatter.System.Web.UI.IStateFormatter.Deserialize(String serializedState)
at System.Web.UI.Util.DeserializeWithAssert(IStateFormatter formatter, String serializedState)
at System.Web.UI.HiddenFieldPageStatePersister.Load()
I'm inclined to think there is a problem with the data being assigned to the viewstate.
For example:
List<int> SelectedActionIDList = GetSelectedActionIDList();
ViewState["_SelectedActionIDList"] = SelectedActionIDList;
It's difficult to guess the source of the error without being able to reproduce the error locally.
If anyone has had any experience with this error, I would really like to know what you found out.
After urlDecode processes the text, it replaces all '+' chars with ' ' - thus the error. Simply call this statement to make the string Base64-compatible again:
sEncryptedString = sEncryptedString.Replace(' ', '+');
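A small, self-contained Java demonstration of this round trip (the byte value is arbitrary, chosen only so that the Base64 output contains a '+'; the class and method names are mine):

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Demonstrates how a URL-decode pass corrupts a Base64 string by
// turning '+' into ' ', and how replacing ' ' with '+' repairs it.
class PlusCorruptionDemo {
    static String encode(byte[] raw) {
        return Base64.getEncoder().encodeToString(raw);
    }

    // URL decoding treats '+' as an encoded space.
    static String urlDecode(String base64) {
        return URLDecoder.decode(base64, StandardCharsets.UTF_8);
    }

    // The one-line fix from the answer above.
    static String repair(String corrupted) {
        return corrupted.replace(' ', '+');
    }
}
```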
I've seen this error caused by the combination of a good-sized viewstate and over-aggressive content-filtering devices/firewalls (especially when dealing with K-12 educational institutions).
We worked around it by storing Viewstate in SQL Server. Before going that route, I would recommend trying to limit your use of viewstate by not storing anything large in it and turning it off for all controls which do not need it.
References for storing ViewState in SQL Server:
MSDN - Overview of PageStatePersister
ASP Alliance - Simple method to store viewstate in SQL Server
Code Project - ViewState Provider Model
My guess is that something is either encoding or decoding too often, or that you've got text with multiple lines in it.
Base64 strings have to be a multiple of 4 characters in length - every 4 characters represents 3 bytes of input data. Somehow, the view state data being passed back by ASP.NET is corrupted - the length isn't a multiple of 4.
Do you log the user agent when this occurs? I wonder whether it's a badly-behaved browser somewhere... another possibility is that there's a proxy doing naughty things. Likewise try to log the content length of the request, so you can see whether it only happens for large requests.
Try this:
// Note: despite the name in the original post, this method *decodes* a
// Base64 string that has been mangled in transit (spaces for '+',
// stripped '=' padding).
public string DecodeBase64(string data)
{
    // restore '+' characters that URL decoding turned into spaces
    string s = data.Trim().Replace(" ", "+");
    // re-append '=' padding so the length is a multiple of 4
    if (s.Length % 4 > 0)
        s = s.PadRight(s.Length + 4 - s.Length % 4, '=');
    return Encoding.UTF8.GetString(Convert.FromBase64String(s));
}
int len = qs.Length % 4;
if (len > 0) qs = qs.PadRight(qs.Length + (4 - len), '=');
where qs is any base64 encoded string
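The same padding computation in Java, for reference (class and method names are mine):

```java
import java.util.Base64;

class PadBase64Demo {
    // Re-append the '=' padding stripped in transit so the string's
    // length is a multiple of 4 again; already-valid input is untouched.
    static String pad(String s) {
        int remainder = s.length() % 4;
        return remainder == 0 ? s : s + "====".substring(remainder);
    }
}
```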
As others have mentioned this can be caused when some firewalls and proxies prevent access to pages containing a large amount of ViewState data.
ASP.NET 2.0 introduced the ViewState Chunking mechanism which breaks the ViewState up into manageable chunks, allowing the ViewState to pass through the proxy / firewall without issue.
To enable this feature simply add the following line to your web.config file.
<pages maxPageStateFieldLength="4000" />
This should not be used as an alternative to reducing your ViewState size but it can be an effective backstop against the "Invalid length for a Base-64 char array" error resulting from aggressive proxies and the like.
This isn't an answer, sadly. After running into the intermittent error for some time and finally being annoyed enough to try to fix it, I have yet to find a fix. I have, however, determined a recipe for reproducing my problem, which might help others.
In my case it is SOLELY a localhost problem, on my dev machine that also has the app's DB. It's a .NET 2.0 app I'm editing with VS2005. The Win7 64 bit machine also has VS2008 and .NET 3.5 installed.
Here's what will generate the error, from a variety of forms:
Load a fresh copy of the form.
Enter some data, and/or postback with any of the form's controls. As long as there is no significant delay, repeat all you like, and no errors occur.
Wait a little while (1 or 2 minutes maybe, not more than 5), and try another postback.
A minute or two of delay "waiting for localhost", then "Connection was reset" in the browser, and global.asax's application error trap logs:
Application_Error event: Invalid length for a Base-64 char array.
Stack Trace:
at System.Convert.FromBase64String(String s)
at System.Web.UI.ObjectStateFormatter.Deserialize(String inputString)
at System.Web.UI.Util.DeserializeWithAssert(IStateFormatter formatter, String serializedState)
at System.Web.UI.HiddenFieldPageStatePersister.Load()
In this case, it is not the SIZE of the viewstate but something to do with page and/or viewstate caching that seems to be biting me. Setting the <pages> attributes enableEventValidation="false" and viewStateEncryption="Never" in Web.config did not change the behavior. Neither did setting maxPageStateFieldLength to something modest.
Take a look at your HttpHandlers. I've been noticing some weird and completely random errors over the past few months after I implemented a compression tool (RadCompression from Telerik). I was noticing errors like:
System.Web.HttpException: Unable to validate data.
System.Web.HttpException: The client disconnected.---> System.Web.UI.ViewStateException: Invalid viewstate.
and
System.FormatException: Invalid length for a Base-64 char array.
System.Web.HttpException: The client disconnected. ---> System.Web.UI.ViewStateException: Invalid viewstate.
I wrote about this on my blog.
This is because of a huge viewstate. In my case I got lucky, since I was not using the viewstate: I just added EnableViewState="false" on the form tag, and the viewstate went from 35 KB to 100 chars.
During initial testing of Membership.ValidateUser with a SqlMembershipProvider, I used a hash (SHA1) algorithm combined with a salt, and if I changed the salt length to a length not divisible by four, I received this error.
I have not tried any of the fixes above, but if the salt is being altered, this may help someone pinpoint that as the source of this particular error.
As Jon Skeet said, the string must be a multiple of 4 characters in length. But I was still getting the error.
At least it went away in debug mode: put a breakpoint on Convert.FromBase64String() and step through the code. Miraculously, the error disappeared for me :) It is probably related to view state and similar issues, as others have reported.
In addition to #jalchr's solution, which helped me, I found that when calling ATL::Base64Encode from a C++ application to encode content that you pass to an ASP.NET web service, you need something else too. In addition to
sEncryptedString = sEncryptedString.Replace(' ', '+');
from #jalchr's solution, you also need to ensure that you do not use the ATL_BASE64_FLAG_NOPAD flag with ATL::Base64Encode:
BOOL bEncoded = Base64Encode(lpBuffer,
nBufferSizeInBytes,
strBase64Encoded.GetBufferSetLength(base64Length),
&base64Length,ATL_BASE64_FLAG_NOCRLF/*|ATL_BASE64_FLAG_NOPAD*/);