I am using JSP/servlets in my application, with the WAR deployed on a JBoss 7.0.2 server. One servlet contains database code and is called many times per second (say 500 times). Under that many concurrent threads it falls over: JBoss 7.0.2 cannot handle them and throws an exception.
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Unknown Source)
Here is my servlet:

public class Test extends HttpServlet {

    private static final long serialVersionUID = 1L;

    protected void doGet(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        processRequest(request, response);
    }

    protected void doPost(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        processRequest(request, response);
    }

    public void processRequest(HttpServletRequest request,
            HttpServletResponse response) {
        Logger log = LoggerFactory.getLogger(Test.class);
        /* here is my code to insert the data in database. */
        TestClass testobj = new TestClass();
        testobj.setparam("");
        smsmanager1.add(sms);
        smsmanager1 = null;
        sms = null;
    }
}
Code for the add method:

public void add(T obj) {
    SessionFactory sessionFactory = HibernateUtil.getSessionFactory();
    Session session = sessionFactory.openSession();
    Transaction transaction = null;
    try {
        transaction = session.beginTransaction();
        session.save(obj);
        session.flush(); // flush must happen before (or as part of) the commit
        transaction.commit();
    } catch (HibernateException e) {
        if (transaction != null) {
            transaction.rollback();
        }
        e.printStackTrace();
    } finally {
        if (session != null) {
            session.close();
        }
        session = null;
        transaction = null;
    }
}
I have tested with a blank servlet that contains only a single console print statement. That works fine, but the servlet above does not.
Am I on the right track here?
How can the server handle such a servlet for the 500-800 threads above?
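For context on the error itself: java.lang.OutOfMemoryError: unable to create new native thread means the JVM or OS ran out of room for more threads, which usually points at something spawning a new thread (or a new thread-owning SessionFactory/connection pool) per request instead of reusing a shared, bounded pool. A plain-Java sketch of the bounded-pool idea (the class and the numbers are illustrative, not from the original application):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedPoolDemo {
    // Run `tasks` quick jobs on a fixed pool of `threads` worker
    // threads and return how many of them completed.
    static int runAll(int tasks, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> { completed.incrementAndGet(); });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 500 near-simultaneous "requests" never need more than
        // 8 native threads when a bounded pool is reused.
        System.out.println(runAll(500, 8)); // prints 500
    }
}
```

The same principle applies to the Hibernate side: if HibernateUtil.getSessionFactory() builds a fresh SessionFactory per call, each call can create its own connection pool and background threads, so caching a single factory is worth checking first.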
I need to improve logging in a Java EE application running on WildFly, using the JBoss logger and Logstash. I'm using MDC to store a user ID, but as I'm new to working with threads I can't figure out how to clear the MDC before a thread is recycled.
I have found different ways to clear the MDC, but I think I'm missing some pieces of knowledge regarding threads:
I've tried to extend Thread:

public class MdcThread extends Thread {

    LoggingTools loggingTools = new LoggingTools(MdcThread.class);

    @Override
    public void run() {
        loggingTools.info("MdcThread");
        MDC.clear();
    }
}
I've tried to extend ThreadPoolExecutor:

public class MdcThreadPoolExecutor extends ThreadPoolExecutor {

    static LoggingTools loggingTools = new LoggingTools(MdcThreadPoolExecutor.class);

    ...constructors...

    @Override
    public void execute(Runnable command) {
        super.execute(wrap(command));
    }

    public static Runnable wrap(final Runnable runnable) {
        return new Runnable() {
            @Override
            public void run() {
                try {
                    runnable.run();
                } finally {
                    loggingTools.info("Mdc clear");
                    MDC.clear();
                }
            }
        };
    }
}
But neither of these is ever called... So I assume a ThreadPoolExecutor is one way of using threads, but not necessarily the one actually in use? How can I hook into the lifecycle of the threads?
EDIT:
Here is the filter I've used:

@WebFilter("/*")
public class MdcFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        // TODO Auto-generated method stub
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        if (request != null) {
            // add what I want in MDC
        }
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
        MDC.clear();
    }
}
If you're using logback then WildFly or JBoss Log Manager will not be managing the MDC. Most implementations of MDC, and I assume you're using org.slf4j.MDC since you're using logback, are backed by thread-locals, so MDC.clear() will only clear that thread's MDC map. Have a look at slf4j's MDC manual.
If you want to clear the mapped diagnostic context, you need to do it in the same thread that adds the data you want cleared.
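In practice that means clearing the MDC in a finally block inside doFilter, around chain.doFilter, rather than in destroy(), which runs once at shutdown on a different thread. Since the filter itself can't run standalone, here is a plain-Java sketch of the same thread-local mechanics, with a ThreadLocal standing in for the MDC map (names are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadLocalClearDemo {
    // Stand-in for an MDC entry: one value per thread.
    static final ThreadLocal<String> USER_ID = new ThreadLocal<>();

    // What the *next* task on the recycled thread observes.
    static volatile String leaked = "unset";

    static String run() throws InterruptedException {
        // A single worker thread, reused for both tasks, just like
        // a servlet container recycling pool threads across requests.
        ExecutorService pool = Executors.newFixedThreadPool(1);
        pool.submit(() -> {
            USER_ID.set("alice");
            try {
                // ... handle the "request" here ...
            } finally {
                USER_ID.remove(); // the moral equivalent of MDC.clear()
            }
        });
        pool.submit(() -> { leaked = USER_ID.get(); });
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return leaked;
    }

    public static void main(String[] args) throws InterruptedException {
        // Nothing leaked to the recycled thread: prints null.
        System.out.println(run());
    }
}
```

Remove the remove() call from the finally block and the second task would see "alice", which is exactly the leak between requests the question describes.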
@GET("product/allProduct.json;cakephp={session_key}")
Call<Product> getProductData(@Path("cakephp") String session_Key);

Call<Product> call = apiInterface.getProductData(session_key);
call.enqueue(new Callback<Product>() {
    @Override
    public void onResponse(Call<Product> call, Response<Product> response) {
        Log.d("TAG", "success" + new Gson().toJson(response.body()));
    }

    @Override
    public void onFailure(Call<Product> call, Throwable t) {
        Log.d(TAG, "onResponseFail ");
    }
});
I am getting a null response from the above code. Please provide some input if I am missing anything.
My image files are stored in a database (I know they shouldn't be, but I can't help it).
To be able to render them on clients, I've implemented an async servlet that reads the binary stream off the database column and writes it to the output stream of the servlet response. Traditional IO works just fine here.
When I tried non-blocking IO with the async servlet (to test performance), the binary data returned in the response kept getting corrupted.
Starting with the Oracle blog, I've seen various examples of file upload with async NIO servlets, but nothing that helps with my issue.
Here's the servlet code:

@WebServlet(asyncSupported = true, urlPatterns = "/myDownloadServlet")
public class FileRetrievalServletAsyncNIO extends HttpServlet
{
    private static final long serialVersionUID = -6914766655133758332L;

    @Override
    protected void service(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException
    {
        Queue<byte[]> containerQueue = new LinkedList<byte[]>();
        AsyncContext asyncContext = request.startAsync();
        asyncContext.addListener(new AsyncListenerImpl());
        asyncContext.setTimeout(120000);
        try
        {
            long attachmentId = Long.valueOf(request.getParameter("id"));
            MyAttachmentDataObject retObj = ServletUtils.fetchAttachmentHeaders(attachmentId);
            response = (HttpServletResponse) asyncContext.getResponse();
            response.setHeader("Content-Length", String.valueOf(retObj.getContentLength()));
            if (Boolean.valueOf(request.getParameter(ServletConstants.REQ_PARAM_ENABLE_DOWNLOAD)))
                response.setHeader("Content-disposition", "attachment; filename=" + retObj.getName());
            response.setContentType(retObj.getContentType());
            ServletOutputStream sos = response.getOutputStream();
            ServletUtils.fetchContentStreamInChunks(attachmentId, containerQueue); // reads from database and adds to the queue in chunks
            sos.setWriteListener(new WriteListenerImpl(sos, containerQueue, asyncContext));
        }
        catch (NumberFormatException | IOException exc)
        {
            exc.printStackTrace();
            request.setAttribute("message", "Failed");
        }
    }
}
Here's the write listener implementation:

public class WriteListenerImpl implements WriteListener
{
    private ServletOutputStream output = null;
    private Queue<byte[]> queue = null;
    private AsyncContext asyncContext = null;
    private HttpServletRequest request = null;
    private HttpServletResponse response = null;

    public WriteListenerImpl(ServletOutputStream sos, Queue<byte[]> q, AsyncContext aCtx)
    {
        output = sos;
        queue = q;
        asyncContext = aCtx;
        request = (HttpServletRequest) asyncContext.getRequest();
        // response must be initialized too, or onError() will throw a NullPointerException
        response = (HttpServletResponse) asyncContext.getResponse();
    }

    @Override
    public void onWritePossible() throws IOException
    {
        while (output.isReady())
        {
            while (!queue.isEmpty())
            {
                byte[] temp = queue.poll();
                output.write(temp, 0, temp.length);
            }
            asyncContext.complete();
            request.setAttribute("message", "Success");
        }
    }

    @Override
    public void onError(Throwable t)
    {
        System.err.println(t);
        try
        {
            response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
        }
        catch (IOException exc)
        {
            exc.printStackTrace();
        }
        request.setAttribute("message", "Failure");
        asyncContext.complete();
    }
}
The response data looks like this:
What am I doing wrong?
Not sure exactly what you expect the output to look like, but in terms of async I/O you should check output.isReady() before every write. So your onWritePossible code should be:
while (output.isReady() && !queue.isEmpty())
{
    byte[] temp = queue.poll();
    output.write(temp, 0, temp.length);
}
if (queue.isEmpty()) {
    asyncContext.complete();
    request.setAttribute("message", "Success");
}
This allows onWritePossible() to return when writing becomes blocked, which is basically the point of async I/O.
If you write while writing is blocked (i.e. output.isReady() would return false), different implementations may either ignore the write or throw an exception. Either way, your output data would be missing some writes in the middle or be truncated.
I'm using JSF 2.2 (Mojarra) and Facelets in a web application. I use a custom ExceptionHandler to handle exceptions, and I leverage the JSF implicit navigation system to have the server navigate to the 'error.xhtml' page.
public class FrontEndExceptionHandler extends ExceptionHandlerWrapper {

    private ExceptionHandler wrapped;

    FrontEndExceptionHandler(ExceptionHandler exception) {
        this.wrapped = exception;
    }

    @Override
    public ExceptionHandler getWrapped() {
        return wrapped;
    }

    @Override
    public void handle() throws FacesException {
        final Iterator<ExceptionQueuedEvent> iter = getUnhandledExceptionQueuedEvents().iterator();
        while (iter.hasNext()) {
            ExceptionQueuedEvent event = iter.next();
            ExceptionQueuedEventContext context = (ExceptionQueuedEventContext) event.getSource();
            // get the exception from context
            Throwable t = context.getException();
            final FacesContext fc = FacesContext.getCurrentInstance();
            final Flash flash = fc.getExternalContext().getFlash();
            final NavigationHandler nav = fc.getApplication().getNavigationHandler();
            try {
                // redirect to the error page
                flash.put("erorrDetails", t.getMessage());
                nav.handleNavigation(fc, null, "/errors/error.xhtml");
                fc.renderResponse();
            } finally {
                // remove it from the queue
                iter.remove();
            }
        }
        // let the parent handle the rest
        getWrapped().handle();
    }
}
This assumes that the exception being handled does not occur during the Render Response phase. But exceptions in a Facelets page can also occur during the Render Response phase, and then the following code does not work correctly:

nav.handleNavigation(fc, null, "/errors/error.xhtml");

Does anybody have an idea how to convey the desired information? Is there a way to navigate to error.xhtml without using the NavigationHandler?
I have a couple of questions I would like to ask regarding correct design and concurrency. For the example, I created a simple application that takes parameters via a servlet and adds them to a database. The process is:
1) Send firstname/lastname to the servlet.
2) The servlet calls PersonDao.createPerson(firstname, lastname).
Classes involved:
PersonDao (interface)
PersonDaoImpl (concrete class)
AbstractDao (abstract class)
PersonController (servlet)
I would like to know your opinions on whether this is correctly designed, connection-pooled code. Is the static creation of the data source correct? Would you change anything in the AbstractDao class that could pose a concurrency issue?
public interface PersonDao {
    public void createPerson(String firstname, String lastname);
}
_
public class PersonDaoImpl extends AbstractDao implements PersonDao {

    @Override
    public void createPerson(String firstname, String lastname) {
        String query = " insert into persons values (?,?) ";
        Connection connection = null;
        PreparedStatement ps = null;
        try {
            connection = getConnection();
            ps = connection.prepareStatement(query);
            ps.setString(1, firstname);
            ps.setString(2, lastname);
            ps.executeUpdate();
        } catch (SQLException e) {
            System.out.println(e.toString());
        } finally {
            close(connection, ps, null);
        }
    }
}
_
public abstract class AbstractDao {

    protected static DataSource dataSource;

    static {
        try {
            dataSource = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyDataSource");
        } catch (NamingException e) {
            throw new ExceptionInInitializerError("'jdbc/MyDataSource' not found in JNDI");
        }
    }

    protected Connection getConnection() throws SQLException {
        return dataSource.getConnection();
    }

    protected void close(Connection connection) {
        close(connection, null, null);
    }

    protected void close(Connection connection, Statement ps) {
        close(connection, ps, null);
    }

    protected void close(Connection connection, Statement ps, ResultSet rs) {
        try {
            if (rs != null)
                rs.close();
            if (ps != null)
                ps.close();
            if (connection != null)
                connection.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
-
@WebServlet("/PersonController")
public class PersonController extends HttpServlet {

    private static final long serialVersionUID = 1L;

    public PersonController() {
        super();
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    }

    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        String firstname = request.getParameter("firstname");
        String lastname = request.getParameter("lastname");
        PersonDao personDao = new PersonDaoImpl();
        personDao.createPerson(firstname, lastname);
    }
}
My other question is whether there are concurrency issues here, specifically in the servlet. Imagine 1000 requests hitting the servlet simultaneously. What worries me is PersonDaoImpl.
1000 different threads, each with its own stack, means 1000 different instances of PersonDaoImpl. Going into AbstractDao, each one calls getConnection() on the data source.
So the questions are:
Does getConnection() pose a concurrency issue?
Can the 1000 different requests pose a threat to the DataSource object in the code above?
What if there were a private PersonDao personDao = new PersonDaoImpl() instance field in the servlet? What happens then?
What I'm really confused about is what happens inside doGet when PersonDaoImpl is instantiated. Can someone give me a walkthrough, please? The gist of my question is whether the code above is thread-safe.
Ironically, I just answered a question from October exactly like this.
See my answer here.
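To illustrate the thread-safety part of the question with runnable code: a DAO with no mutable instance fields is safe to share across any number of request threads, whether it is created once per request or held in a servlet field, because every call works only with its own locals and parameters. A plain-Java sketch (class names are hypothetical, not from the original code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StatelessDaoDemo {
    // A stateless "DAO": no instance fields, so sharing one
    // instance across threads cannot cause interference.
    static class GreetingDao {
        String create(String first, String last) {
            return first + " " + last;
        }
    }

    // One shared instance, like a DAO held in a servlet field.
    static final GreetingDao DAO = new GreetingDao();

    // Fire `calls` concurrent requests at the shared DAO and
    // count how many produced the expected result.
    static int run(int calls) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(16);
        List<Future<String>> results = new ArrayList<>();
        for (int i = 0; i < calls; i++) {
            final int n = i;
            results.add(pool.submit(() -> DAO.create("user", String.valueOf(n))));
        }
        int ok = 0;
        for (Future<String> f : results) {
            if (f.get().startsWith("user ")) ok++;
        }
        pool.shutdown();
        return ok;
    }

    public static void main(String[] args) throws Exception {
        // All 1000 concurrent calls succeed against one shared instance.
        System.out.println(run(1000));
    }
}
```

The danger would only appear if the DAO cached state in instance fields (say, a Connection field reused across calls); the code in the question keeps Connection and PreparedStatement as locals, which is the pattern this sketch relies on.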