I am using PrimeFaces to provide a file download action in JSF. The code I am using is:
<p:commandButton id="xlsExport" value="Export XLS"
                 ajax="false"
                 onclick="PrimeFaces.monitorDownload(startPleaseWaitMonitor, stopPleaseWaitMonitor);">
    <p:fileDownload value="#{SampleBean.XLSExport}" />
</p:commandButton>
SampleBean has the following method:
public StreamedContent getXLSExport() {
    ...
    byte[] content = generator.generateXLS();
    return new DefaultStreamedContent(new ByteArrayInputStream(content),
            "application/vnd.ms-excel", fileName, "UTF-8");
}
I am using it on two application servers, JBoss and WebSphere. On WebSphere I see a warning in the server logs when I do the export:
000000f5 SRTServletRes W
com.ibm.ws.webcontainer.srt.SRTServletResponse setStatus WARNING:
Cannot set status. Response already committed.
When I run a similar method for CSV export there is no warning, and there is no warning on JBoss either.
What could be the reason for this log warning?
I've recreated this locally - it appears that PrimeFaces' FileDownloadActionListener is attempting to set the response status code after that response has already been committed by the server. The FileDownload code grabs the response output stream, writes the entire contents of the downloaded file to it, and then attempts to update the response status code.
WebSphere commits and flushes a response when the amount of data passed into the response buffer exceeds a certain threshold (by default 32K). Once the response has been committed, its headers (e.g. status code) can't be updated. The other application servers probably behave the same way here - they just might not log a warning message. In this particular case the warning isn't anything to worry about, since the FileDownload code was just attempting to update the status code from 200 -> 200.
Using different content types (like CSV) shouldn't make a difference here. File size does make a difference - if a file is downloaded that's less than the response buffer size, then the response won't be committed before the PrimeFaces code tries to set its status.
A simple fix for this warning message would be to check to see if the response is committed before attempting to change its status. I've opened a PrimeFaces issue for this: https://github.com/primefaces/primefaces/issues/3955
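The idea, as a minimal sketch (illustrative class and method names, not the actual PrimeFaces patch):
import javax.faces.context.FacesContext;
import javax.servlet.http.HttpServletResponse;

public final class ResponseStatusGuard {

    // Only touch the status if nothing has been flushed to the client yet.
    public static void setStatusIfPossible(FacesContext facesContext, int status) {
        HttpServletResponse response =
                (HttpServletResponse) facesContext.getExternalContext().getResponse();

        if (!response.isCommitted()) {
            response.setStatus(status);
        }
        // If the response is already committed, skip the call; the status
        // that went out with the first flushed chunk stays in effect.
    }
}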
Update: I provided a fix to PrimeFaces, so you shouldn't see this anymore in the nightly builds/next version.
I have a bug in my JSF app that randomly strips all my HTTP parameters, and I can't get any error messages (even when following BalusC's advice here and here).
I can't pinpoint the cause and fix it, so I'm wondering if another solution is possible: forcing the request to be resent if all my parameters are empty. Is there a way to make the browser resend its request, for example through a JSF or HTTP error code?
EDIT: Cleaned up unnecessary code.
In the end, I followed Xtreme Biker's advice and built a filter instead. It checks whether the parameters are present; otherwise it sends a redirect response (HTTP status code 307). The browser then sends back the same request, which goes through.
Something like:
String formWebContainerWidth = httpRequest.getParameter("myParam");
if (formWebContainerWidth == null) {
    // Parameter missing: ask the browser to repeat the same request.
    // SC_TEMPORARY_REDIRECT = 307
    httpResponse.setStatus(HttpServletResponse.SC_TEMPORARY_REDIRECT);
    httpResponse.setHeader("Location", httpRequest.getRequestURI());
} else {
    chain.doFilter(request, response);
}
EDIT: Added the Location header, otherwise the browser sometimes displays a "page cannot be reached" error message.
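For completeness, here is roughly how that fragment sits inside a full servlet filter. This is only a sketch: the class name, the @WebFilter mapping and the parameter name are placeholders, not the actual code from my application.
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Placeholder name and mapping -- adapt to your own pages.
@WebFilter("/pages/*")
public class ResendIfParameterMissingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        HttpServletResponse httpResponse = (HttpServletResponse) response;

        if (httpRequest.getParameter("myParam") == null) {
            // 307 keeps the original method and body, so the browser
            // simply repeats the request against the same URI.
            httpResponse.setStatus(HttpServletResponse.SC_TEMPORARY_REDIRECT);
            httpResponse.setHeader("Location", httpRequest.getRequestURI());
        } else {
            chain.doFilter(request, response);
        }
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}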
I am trying to implement Paul Calhoun's Apache FOP solution for creating PDFs from XPages (from Notes In 9 #102). I am getting the following Java exception when trying to run the xAgent that does the processing: Can't get a Writer while an OutputStream is already in use
The only change I have made to Paul's code was the package name. I have isolated the exception to the SSJS line: var jce:DominoXMLFO2PDF = new DominoXMLFO2PDF(); All that line does is instantiate the class; there is no custom constructor. I don't believe it is the code itself, but some configuration issue. The SSJS code is in the beforeRenderResponse event where it should be, and I haven't changed anything in the xAgent.
I have copied the jar files from Paul's sample database to mine and verified that the build paths are the same between the two databases. Everything compiles fine (after I did all this). This appears to be an XPages-only exception.
Here's what's really going on with this error:
XPages are essentially servlets... everything that happens in an XPage is just layers on top of a servlet engine. There are basically two types of data that a servlet can send back to whatever is initiating the connection (e.g. a browser): text and binary.
An ordinary XPage sends text -- specifically, HTML. Some xAgents also send text, such as JSON or XML. In any of these scenarios, however, Domino uses a Java Writer to send the response content, because Writers are optimized for sending Character data.
When we need to send binary content, we use an OutputStream instead, because streams are optimized for sending generic byte data. So if we're sending PDF, DOC/XLS/PPT, images, etc., we need to use a stream, because we're sending binary data, not text.
The catch (as you'll soon see, that's a pun) is that we can only use one per response.
Once any HTTP client is told what the content type of a response is, it makes assumptions about how to process that content. So if you tell it to expect application/pdf, it's expecting to only receive binary data. Conversely, if you tell it to expect application/json, it's expecting to only receive character data. If the response includes any data that doesn't match the promised content type, that nearly always invalidates the entire response.
So Domino in its infinite wisdom protects us from making this mistake by only allowing us to send one or the other in a single request, and throws an exception if we disobey that rule.
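In plain servlet terms (which is what XPages builds on), the two paths and the rule look roughly like this. This is a simplified sketch with made-up method names, not Paul's code; in an XPages xAgent you'd reach the response through facesContext.getExternalContext().getResponse().
import java.io.IOException;
import java.io.OutputStream;
import java.io.Writer;
import javax.servlet.http.HttpServletResponse;

public class ResponsePaths {

    // Text path: character data goes through a Writer.
    static void sendText(HttpServletResponse response) throws IOException {
        response.setContentType("application/json");
        Writer writer = response.getWriter();
        writer.write("{\"ok\":true}");
    }

    // Binary path: byte data goes through an OutputStream instead.
    static void sendBinary(HttpServletResponse response, byte[] pdfBytes) throws IOException {
        response.setContentType("application/pdf");
        OutputStream out = response.getOutputStream();
        out.write(pdfBytes);
        out.flush();
    }

    // The Servlet spec enforces the same "one per response" rule:
    // calling getWriter() after getOutputStream() (or vice versa) on the
    // same response throws an IllegalStateException.
}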
Unfortunately... if there's any exception in our code when we're trying to send binary content, Domino wants to report that to the consumer... which tries to invoke the output writer to send HTML reporting that something went wrong. Except we already got a handle on the output stream, so Domino isn't allowed to get a handle on the output writer, because that would violate its own rule against only using one per response. This, in turn, throws the exception you reported, masking the exception that actually caused the problem (in your case, probably a ClassNotFoundException).
So how do we make sure that we see the real problem, and not this misdirection? We try:
try {
    /*
     * Move all your existing code here...
     */
} catch (e) {
    print("Error generating dynamic PDF: " + e.toString());
} finally {
    facesContext.responseComplete();
}
There are two reasons this is a preferred approach:
If something goes wrong with our code, we don't let Domino throw an exception about it. Instead, we log it (instead of using print to send it to the console and log, you could also toss it to OpenLog, or whatever your preferred logging mechanism happens to be). This means that Domino doesn't try to report the error to the user, because we've promised that we already reported it to ourselves.
By moving the crucial facesContext.responseComplete() call (which is what ultimately tells Domino not to send any content of its own) to the finally block, this ensures it will get executed. If we left it inside the try block, it would get skipped if an exception occurs, because we'd skip straight to the catch... so even though Domino isn't reporting our exception because we caught it, it still tries to invoke the response writer because we didn't tell it not to.
If you follow the above pattern, and something's wrong with your code, then the browser will receive an incomplete or corrupt file, but the log will tell you what went wrong, rather than reporting an error that has nothing to do with the root cause of the problem.
I almost deleted this question, but decided to answer it myself since there is very little out on Google when you search for the exception.
The issue was in the xAgent: there is an importPackage line that was incorrect. Fixing this made everything work. The exception verbiage, "Can't get a Writer while an OutputStream is already in use", is quite misleading. I don't know what else triggers this exception, but an alternative description would be "Java class ??yourClass?? not found".
If you found this question, then you likely have the same issue. I would ignore what the exception actually says and check your package statements throughout your application. The Java code will error on its own, but your SSJS that references the Java will not error until runtime; focus on that code.
Updating the response headers after writing the body can also work, as long as the response has not yet been committed. For example:
HttpServletResponse response = (HttpServletResponse) facesContext.getExternalContext().getResponse();
response.getWriter().write("<html><body>...</body></html>");
response.setContentType("text/html");
response.setHeader("Cache-Control", "no-cache");
response.setCharacterEncoding("UTF-8");
I know writing business logic in getters and setters is a very bad programming practice, but is there any way to handle exceptions if the response is already committed?
What exactly is the meaning of "Response already committed" and "Headers are already sent to the client"?
There's no nice way to handle exceptions if the response is already committed. The HTTP response consists basically of a header and a body. The headers instruct the client (the web browser) how exactly it should deal with the response, e.g. the content type, the content length, the character encoding, the body encoding, the cache instructions, etcetera.
You can see the headers in the HTTP traffic monitor of the web browser's developer toolset. Press F12 in Chrome/IE9+/Firefox23+ and check the "Network" tab. The screenshot below is what my Chrome shows for your current question:
(note: the "Response" tab shows the response body)
The response body is the actual content, usually in the flavor of a bunch of HTML code. The server usually has a fixed-size buffer to write the response to. The buffer size depends on the server make/version and configuration and is usually 2KB~10KB. If this buffer overflows, then it will be flushed to the other end of the connection, the client. This is the commit of a response. The client has then already obtained the first part of the response, usually representing the whole bunch of headers and maybe a part of the body.
The commit of a response is a point of no return. The server cannot take the already-sent bytes back. It's too late to change the response headers (for example, a redirect is basically instructed by a Location header containing the new URL), let alone the response body. The best you can do is append the error information to the already-written response body. But this may end up in some weird-looking HTML, as it's not known which HTML tags need to be closed at that point. The browser may fail to present it in a proper manner.
Apart from avoiding business logic in getters so that exceptions are not thrown while rendering the response, another way to avoid an already-committed response is to configure the response buffer size to be as large as the largest page your webapp can serve. How to do that depends on the server make/version. In Tomcat, for example, you can configure it as the bufferSize attribute of the <Connector> element. Note that this won't prevent flushing if your own code is (implicitly) calling flush() on the response output stream.
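If changing the server configuration isn't an option, the Servlet API also lets you enlarge the buffer per response, as long as you do it before anything is written. A minimal sketch (the filter name and the 512KB value are arbitrary examples):
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class ResponseBufferFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Must run before any content is written to the response,
        // otherwise the container throws an IllegalStateException.
        response.setBufferSize(512 * 1024);
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}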
Good explanation, BalusC. I would add that PrimeFaces has an issue in their exception handler: they try to redirect to the error page after the request has already been committed. And as you said, the only solution I found is to add some extra content to the response body. I override the handler and add this code:
if (extContext.isResponseCommitted()) {
    // Too late to redirect: append a script to the partial response that
    // sends the browser to the error page, then close the elements the
    // partial response writer has already opened.
    PartialResponseWriter writer = context.getPartialViewContext().getPartialResponseWriter();
    writer.startElement("script", null);
    writer.write("window.location.href = '" + errorPageUrl + "';");
    writer.endElement("script");
    writer.getWrapped().endCDATA();
    writer.endElement("update");
    writer.getWrapped().endDocument();
}
else {
    // Not committed yet: a normal redirect still works.
    extContext.redirect(errorPageUrl);
    context.responseComplete();
}
I'm running MyFaces 2.1.7 and am desperately trying to reduce the memory usage of our web application. Basically, the session size of each user balloons to up to 10MB, which we can't handle.
I tried adding the following to the deployment descriptor:
<context-param>
    <param-name>org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION</param-name>
    <param-value>3</param-value>
</context-param>
The result was OK and the session size of any given user wasn't going higher than 1MB. But since the change a lot of users aren't able to log in. What is happening is that a ViewExpiredException is thrown on the login screen, and I wrote a custom ViewExpiredException handler that redirects the user back to the login screen, so essentially they're stuck in a login-screen loop:
try to log in ---> ViewExpiredException thrown ---> custom ViewExpiredHandler ---> forward user to login screen ---> (back to the start)
Removing the above context-param fixes the issue. My questions are:
1) Why is the ViewExpiredException thrown when NUMBER_OF_VIEWS_IN_SESSION is reduced from its default value?
2) Is it possible to work around the issue using the custom ViewExpiredHandler class? How?
3) Am I doing something wrong that is causing this issue?
P.S. In testing why this is happening: if I open, for example, IE and try to log in, all is OK, but if I then open another browser (e.g. Chrome) and log in with another username, I run into the issues above, mirroring what the users who cannot log in are reporting.
Also, following hints from the MyFaces wiki I have added the following to the descriptor, but I don't believe it contributes to the issue:
<context-param>
    <param-name>org.apache.myfaces.SERIALIZE_STATE_IN_SESSION</param-name>
    <param-value>false</param-value>
</context-param>
Update:
As suggested by BalusC, I changed STATE_SAVING_METHOD to client in order to resolve the memory issues we are having. Immediately all UAT testers reported a dramatic slowdown on all page loads, so I had to revert to STATE_SAVING_METHOD set to server.
Since then I've been experimenting with the value of org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION, and with the current value of 6 (a 70% improvement over the default of 20 views) I am not getting the ViewExpiredException errors any more (the reason this question was created originally).
But in a way I am playing Russian roulette. I don't really know why having org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION set to 3/4/5 didn't work and why 6 is working, and it bothers me not being able to explain it. Is anyone able to provide information?
I originally asked this question after noticing how use of the browser's BACK button was causing a ViewExpiredException to be thrown.
I had also reduced the default value of 20 for org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION to 3 and then 6 (trying to trim the memory footprint), which aggravated the problem.
Having written a ViewExpiredException handler, I was looking for a way to do something about the exception when it was thrown, and below is how I am dealing with it (a consolidated sketch of steps 2-4 follows the steps):
1 - `org.apache.myfaces.NUMBER_OF_VIEWS_IN_SESSION` value is 6
2 - When the ViewExpiredException is thrown, retrieve the request URL in the ViewExpired handler:
HttpServletRequest orgRequest = (HttpServletRequest) fc.getExternalContext().getRequest();
3 - Insert a boolean into the user session (used as a flag below):
Map<String, Object> sessionMap = FacesContext.getCurrentInstance()
        .getExternalContext().getSessionMap();
sessionMap.put("refreshedMaxSesion", true);
4 - Redirect back to the URL retrieved in step 2:
fc.getExternalContext().redirect(orgRequest.getRequestURI());
5 - Since the page is reloaded, check whether the flag from step 3 exists; if so, flag a notification to be shown to the user and remove the flag from the session:
if (sessionMap.containsKey("refreshedMaxSesion")) {
    showPageForceRefreshed = true;
    sessionMap.remove("refreshedMaxSesion");
}
6 - I'm using Pines Notify to show a subtle "FYI, this page was refreshed" type of message:
<h:outputText rendered="#{myBean.showPageForceRefreshed}" id="PageWasRefreshedInfoId">
    <script type="text/javascript">
        newjq(document).ready(function() {
            newjq.pnotify({
                pnotify_title: 'Info',
                pnotify_text: 'Ehem, Page was Refreshed.',
                pnotify_mouse_reset: false,
                pnotify_type: 'info',
                pnotify_delay: 5000
            });
        });
    </script>
</h:outputText>
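For reference, here are steps 2 to 4 consolidated into one place. This is only a sketch of the relevant part of the handler with an illustrative class and method name, not my full ViewExpiredException handler:
import java.io.IOException;
import java.util.Map;
import javax.faces.context.ExternalContext;
import javax.faces.context.FacesContext;
import javax.servlet.http.HttpServletRequest;

public class ViewExpiredRecovery {

    // Called from the custom ViewExpiredException handler once the
    // exception has been identified.
    static void redirectBackToSameView(FacesContext fc) throws IOException {
        ExternalContext extContext = fc.getExternalContext();

        // Step 2: the URL the user was on when the view expired.
        HttpServletRequest orgRequest = (HttpServletRequest) extContext.getRequest();

        // Step 3: flag the session so the reloaded page can tell the
        // user what happened (same key as in the steps above).
        Map<String, Object> sessionMap = extContext.getSessionMap();
        sessionMap.put("refreshedMaxSesion", true);

        // Step 4: send the browser back to the same URI, which builds a
        // fresh view; redirect() also marks the JSF response as complete.
        extContext.redirect(orgRequest.getRequestURI());
    }
}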
I am getting the following errors in SystemOut logs:
[11/4/11 2:53:13:876 ZZZ] 00000245 srt W
com.ibm.ws.webcontainer.srt.SRTServletResponse setStatus WARNING:
Cannot set status. Response already committed.
[11/4/11 2:53:13:876 ZZZ] 00000245 srt W
com.ibm.ws.webcontainer.srt.SRTServletResponse addHeader WARNING:
Cannot set header. Response already committed.
A bit of searching got me here: http://www-01.ibm.com/support/docview.wss?uid=swg21316420
The solution described there says that we should disable the "Cookie Acceptance test", but I am not able to find where exactly that checkbox is in the admin console.
The technote you are referring to is for a specific application (WebSphere Commerce). If you are getting these warnings with your own application, then the technote doesn't apply. What these warnings mean is that you have a JSP or servlet that calls setStatus or addHeader after too much output has already been written to the response. You need to determine where this happens and either fix your code or increase the output buffer size.
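If it isn't obvious where the response gets committed, one debugging option (a sketch, not WebSphere-specific advice) is to wrap the response in an HttpServletResponseWrapper that logs a stack trace the first time the buffer is explicitly flushed, so the log points at the JSP or servlet code responsible:
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Debugging aid only: logs where the response buffer is explicitly
// flushed, which is one common way a response gets committed early.
// (Overflowing the buffer also commits it, which this cannot intercept;
// for that case, raise the buffer size instead.)
public class CommitTracingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse wrapped = new HttpServletResponseWrapper((HttpServletResponse) response) {
            @Override
            public void flushBuffer() throws IOException {
                if (!isCommitted()) {
                    new Exception("Response about to be committed here").printStackTrace();
                }
                super.flushBuffer();
            }
        };
        chain.doFilter(request, wrapped);
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}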