Connection reset message from JMeter - IIS

When I ran my test I got the error below in the response section of View Results Tree.
I was testing an IIS server.
Was this Connection reset generated by JMeter, or was it caused by the IIS server?
How do you interpret this error message?
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:166)
at org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:90)
at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:281)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:92)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:61)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:127)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:715)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:520)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.executeRequest(HTTPHC4Impl.java:475)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.sample(HTTPHC4Impl.java:295)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:74)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1105)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1094)
at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:429)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:257)
at java.lang.Thread.run(Unknown Source)

There are at least 3 possible reasons:
1. Most probable cause: your server (meaning the web servers handling the requests and any components behind them) is not handling the load correctly and is slowing down. Monitor the system and check its resource usage.
2. You have exhausted your injector's ephemeral ports; you need to adjust your OS TCP settings to increase the port range.
3. You're running the load test in GUI mode with a View Results Tree in the test. This is bad practice, as GC will happen frequently, possibly triggering Stop The World pauses that lead to this error. As per best practices, use non-GUI mode (see the command sketch below):
https://jmeter.apache.org/usermanual/best-practices.html
https://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
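
For reference, a non-GUI run looks like this (a minimal sketch; test.jmx and results.jtl are placeholder file names):

jmeter -n -t test.jmx -l results.jtl

-n selects non-GUI mode, -t points at the test plan, and -l writes the sample results to a file you can analyze afterwards.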

Related

Apache Pulsar gets timeouts in an unpredictable way

I installed Apache Pulsar standalone. Pulsar times out sometimes. It's not related to high throughput nor to a particular topic (see the following log). pulsar-admin brokers healthcheck returns OK, or sometimes also times out. How can I investigate this?
10:46:46.365 [pulsar-ordered-OrderedExecutor-7-0] WARN org.apache.pulsar.broker.service.BrokerService - Got exception when reading persistence policy for persistent://nnx/agent_ns/action_up-53da8177-b4b9-4b92-8f75-efe94dc2309d: null
java.util.concurrent.TimeoutException: null
at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784) ~[?:1.8.0_232]
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928) ~[?:1.8.0_232]
at org.apache.pulsar.zookeeper.ZooKeeperDataCache.get(ZooKeeperDataCache.java:97) ~[org.apache.pulsar-pulsar-zookeeper-utils-2.5.0.jar:2.5.0]
at org.apache.pulsar.broker.service.BrokerService.lambda$getManagedLedgerConfig$32(BrokerService.java:922) ~[org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]
at org.apache.bookkeeper.mledger.util.SafeRun$2.safeRun(SafeRun.java:49) [org.apache.pulsar-managed-ledger-2.5.0.jar:2.5.0]
at org.apache.bookkeeper.common.util.SafeRunnable.run(SafeRunnable.java:36) [org.apache.bookkeeper-bookkeeper-common-4.10.0.jar:4.10.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_232]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_232]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-common-4.1.43.Final.jar:4.1.43.Final]
I am glad you were able to resolve the issue by adding more cores. The issue was a connection timeout while trying to access some topic metadata that is stored inside of ZooKeeper, as indicated by the following line in the stack trace:
at org.apache.pulsar.zookeeper.ZooKeeperDataCache.get(ZooKeeperDataCache.java:97) ~[org.apache.pulsar-pulsar-zookeeper-utils-2.5.0.jar:2.5.0]
Increasing the cores must have freed up enough threads to allow the ZK node to respond to this request.
You can check the connection to the server; this looks like a connection issue. If you are using any TLS certificate file path, check that you have the right certificate.
The problem is that we don't have a lot of solutions on the internet for Apache Pulsar, but following the Apache Pulsar documentation might help, and there are also the Apache Pulsar GitHub repository and sample projects.
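
As a starting point for investigation, the broker health check mentioned in the question can be re-run and timed (a minimal sketch; the path assumes you run it from the Pulsar installation directory):

bin/pulsar-admin brokers healthcheck

If this alternates between OK and timeouts even under no load, that points at the broker (or its ZooKeeper connection) being starved of resources rather than at any particular topic, which matches the CPU-core fix above.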

Connection to MirthDB in Azure

I am running Mirth 3.7.1 on a VM within Azure. The Mirth database is on a SQL Server managed instance within the same Azure subscription. I have several channels which consume ADT/ORM messages that seem to be working as expected; however, I also have a File Reader channel which reads PDF files from disk and sends them as MDM messages. This channel is intermittently erroring (see stack traces below) in what appears to me to be an issue with its connection to the Mirth DB. I am assuming this is because it is attempting to save the larger file data as it moves through the steps in the channel, since the ADT/ORM channels are not having the same issue. We had this same channel running in a traditional environment and did not see this problem. Any thoughts on how to resolve this issue?
Also, I have alerts configured to send email when an error occurs. I am receiving these when the error is within the channel, but I am not being notified of these internal Mirth errors. Is there any way that I can be notified?
Mike
com.mirth.connect.donkey.server.channel.ChannelException: com.mirth.connect.donkey.server.data.DonkeyDaoException: java.sql.SQLException: I/O Error: Connection reset
at com.mirth.connect.donkey.server.channel.Channel.dispatchRawMessage(Channel.java:1213)
at com.mirth.connect.donkey.server.channel.SourceConnector.dispatchRawMessage(SourceConnector.java:192)
at com.mirth.connect.donkey.server.channel.SourceConnector.dispatchRawMessage(SourceConnector.java:170)
at com.mirth.connect.connectors.file.FileReceiver.processFile(FileReceiver.java:354)
at com.mirth.connect.connectors.file.FileReceiver.processFiles(FileReceiver.java:247)
at com.mirth.connect.connectors.file.FileReceiver.poll(FileReceiver.java:203)
at com.mirth.connect.donkey.server.channel.PollConnectorJob.execute(PollConnectorJob.java:49)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)
Caused by: com.mirth.connect.donkey.server.data.DonkeyDaoException: java.sql.SQLException: I/O Error: Connection reset
at com.mirth.connect.donkey.server.data.jdbc.JdbcDao.insertContent(JdbcDao.java:274)
at com.mirth.connect.donkey.server.data.jdbc.JdbcDao.insertMessageContent(JdbcDao.java:193)
at com.mirth.connect.donkey.server.data.buffered.BufferedDao.executeTasks(BufferedDao.java:110)
at com.mirth.connect.donkey.server.data.buffered.BufferedDao.commit(BufferedDao.java:85)
at com.mirth.connect.donkey.server.data.buffered.BufferedDao.commit(BufferedDao.java:72)
at com.mirth.connect.donkey.server.channel.Channel.dispatchRawMessage(Channel.java:1185)
... 8 more
Caused by: java.sql.SQLException: I/O Error: Connection reset
at net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:1093)
at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQL(JtdsStatement.java:563)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeUpdate(JtdsPreparedStatement.java:727)
at com.mirth.connect.donkey.server.data.jdbc.JdbcDao.insertContent(JdbcDao.java:271)
... 13 more
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at java.io.DataInputStream.readFully(Unknown Source)
at java.io.DataInputStream.readFully(Unknown Source)
at net.sourceforge.jtds.jdbc.SharedSocket.readPacket(SharedSocket.java:850)
at net.sourceforge.jtds.jdbc.SharedSocket.getNetPacket(SharedSocket.java:731)
at net.sourceforge.jtds.jdbc.ResponseStream.getPacket(ResponseStream.java:477)
at net.sourceforge.jtds.jdbc.ResponseStream.read(ResponseStream.java:114)
at net.sourceforge.jtds.jdbc.ResponseStream.peek(ResponseStream.java:99)
at net.sourceforge.jtds.jdbc.TdsCore.wait(TdsCore.java:4127)
at net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:1086)
... 16 more
Azure Monitor has capabilities to monitor Azure VM health, but only a limited set of perf counters is included for Windows VMs and Linux VMs.
However, using the Azure Monitor for VMs (preview) Map feature to understand application components, there is an ability to create application maps that allow you to monitor specific aspects of an application environment and trigger alerts. For example, you can set up a map for failed connections, for both processes and connections.
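
Beyond monitoring, one thing that may be worth trying (an assumption on my part, not something confirmed in this thread): the stack trace shows the jTDS driver, which supports TCP keep-alive and socket timeout properties on the connection URL. A sketch, with placeholder host and database names and illustrative values:

import java.sql.Connection;
import java.sql.DriverManager;

public class MirthDbConnectionSketch {
    // socketKeepAlive keeps idle connections from being silently dropped
    // by intermediate network gear; socketTimeout (in seconds) makes a
    // dead socket fail fast instead of hanging.
    static Connection open(String user, String password) throws Exception {
        String url = "jdbc:jtds:sqlserver://myserver.example.net:1433/mirthdb"
                + ";socketKeepAlive=true"
                + ";socketTimeout=300";
        return DriverManager.getConnection(url, user, password);
    }
}

If intermittent resets on idle connections are the culprit, enabling keep-alive at the driver level is a common mitigation.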

Spark scheduler thresholds

I'm running, on top of Spark, an analysis tool that creates plenty of overhead, so computations take a lot more time. When I run it I get this error:
16/08/30 23:36:37 WARN TransportChannelHandler: Exception in connection from /132.68.60.126:36922
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384)
at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
16/08/30 23:36:37 ERROR TaskSchedulerImpl: Lost executor 0 on 132.68.60.126: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
I guess this happens because the scheduler thinks the executor failed, so it starts another one.
The workload is a simple string search (grep), and both master and slave are local, so there aren't supposed to be any failures. When running without the overhead, things are fine.
The question is - can I configure those timeout thresholds somewhere?
Thanks!
Solved it with spark.network.timeout 10000000 in spark-defaults.conf.
I was getting the same error even though I tried many things. My job used to get stuck, throwing this error after running for a very long time. I tried a few workarounds which helped me resolve it. Although I still see the same error, at least my job runs fine.
One reason could be that the executors kill themselves thinking that they lost the connection to the master. I added the configurations below to the spark-defaults.conf file:
spark.network.timeout 10000000
spark.executor.heartbeatInterval 10000000
Basically, I increased the network timeout and the heartbeat interval.
For the particular step which used to get stuck, I also cached the dataframe that is used for processing.
Note: these are workarounds; I still see the same error in the error logs, but my job does not get terminated.
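
The same settings can also be applied programmatically instead of via spark-defaults.conf. A minimal sketch in Java (the app name and timeout values are illustrative; note that Spark requires spark.executor.heartbeatInterval to be significantly smaller than spark.network.timeout):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class TimeoutConfSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("grep-job")
                // how long components wait on network operations before
                // declaring a peer dead
                .set("spark.network.timeout", "600s")
                // how often executors heartbeat to the driver; must stay
                // well below spark.network.timeout
                .set("spark.executor.heartbeatInterval", "60s");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... run the job ...
        sc.stop();
    }
}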

Datastax java driver session hangs

The following problem has occurred for the second time in a few months. The session that tries to open and execute a query using the Java driver hangs the particular thread. As a result, this thread waits forever and causes a thread locking problem. This was resolved with an app server restart, but one cannot manually intervene for these kinds of driver problems. Does anyone have an idea about this?
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:747)
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:905)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1217)
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:292)
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:135)
com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:181)
com.datastax.driver.core.Session.execute(Session.java:111)
com.datastax.driver.core.Session.execute(Session.java:80)
There is an open ticket on this issue (https://datastax-oss.atlassian.net/browse/JAVA-268). Your best bet would be adding any information you have to that ticket.
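In the meantime, a defensive pattern that may help (a sketch on my part, not something from the ticket): instead of Session.execute(), which is exactly where the thread dump above shows the thread parked indefinitely, use executeAsync() and bound the wait, so a hung request surfaces as a TimeoutException instead of a permanently blocked thread:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;

public class BoundedQuerySketch {
    static ResultSet query(Session session, String cql) {
        ResultSetFuture future = session.executeAsync(cql);
        try {
            // wait at most 10 seconds instead of parking forever
            return future.getUninterruptibly(10, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // give up on the hung request
            throw new RuntimeException("Query timed out: " + cql, e);
        }
    }
}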

Nutch crawling gets stuck in spinwaiting or active. How to reduce the fetch cycle?

I am using Nutch 2.1 and crawling a site. The problem is that the crawler keeps showing fetching url spinwaiting/active, and since the fetching takes so much time, the connection to MySQL gets timed out. How can I reduce the number of fetches at a time so that MySQL does not get timed out? Is there a setting in Nutch where I can say: only fetch 100 or 500 URLs, then parse and store to MySQL, and then again fetch the next 100 or 500 URLs?
Error message:
Unexpected error for http://www.example.com
java.io.IOException: java.sql.BatchUpdateException: The last packet successfully received from the server was 36,928,172 milliseconds ago. The last packet sent successfully to the server was 36,928,172 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at org.apache.gora.sql.store.SqlStore.flush(SqlStore.java:340)
at org.apache.gora.mapreduce.GoraRecordWriter.write(GoraRecordWriter.java:65)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:587)
at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
at org.apache.nutch.fetcher.FetcherReducer$FetcherThread.output(FetcherReducer.java:663)
at org.apache.nutch.fetcher.FetcherReducer$FetcherThread.run(FetcherReducer.java:534)
Caused by: java.sql.BatchUpdateException: The last packet successfully received from the server was 36,928,172 milliseconds ago. The last packet sent successfully to the server was 36,928,172 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at com.mysql.jdbc.PreparedStatement.executeBatchSerially(PreparedStatement.java:2028)
at com.mysql.jdbc.PreparedStatement.executeBatch(PreparedStatement.java:1451)
at org.apache.gora.sql.store.SqlStore.flush(SqlStore.java:328)
... 5 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 36,928,172 milliseconds ago. The last packet sent successfully to the server was 36,928,172 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at sun.reflect.GeneratedConstructorAccessor49.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1116)
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3364)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1983)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2163)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2624)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2127)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2427)
at com.mysql.jdbc.PreparedStatement.executeBatchSerially(PreparedStatement.java:1980)
... 7 more
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3345)
... 13 more
I am using Nutch 2.1 and crawling a site. The problem is that the crawler keeps showing fetching url spinwaiting/active, and since the fetching takes so much time, the connection to MySQL gets timed out. How can I reduce the number of fetches at a time so that MySQL does not get timed out?
To reduce the number of fetches, you can add the property below to your nutch-site.xml and edit the value based on your need. Please do not edit nutch-default.xml; rather, copy the property to nutch-site.xml and manage the value from there:
<property>
<name>fetcher.threads.fetch</name>
<value>20</value>
</property>
Regarding the timeout issue, you can possibly add this property to your nutch-site.xml, with a value for the loading time you think is needed:
<property>
<name>http.timeout</name>
<value>240000</value>
<description>The default network timeout, in milliseconds.</description>
</property>
Is there a setting in Nutch where I can say: only fetch 100 or 500 URLs, then parse and store to MySQL, and then again fetch the next 100 or 500 URLs?
Nutch crawls in a cycle with the steps generate/fetch/parse/update, in a number of iterations called 'depth', which you specify in your crawl command. If you would like to have control over your crawling, you can perform each step individually, as described in section 3.2 (Using Individual Commands for Whole-Web Crawling) of the tutorial at http://wiki.apache.org/nutch/NutchTutorial (see the command sketch below). This will give you good direction and help you understand exactly what is happening. Do check the status while fetching each segment so you will know how many URLs are being fetched in each segment.
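
A sketch of one such bounded cycle (command names follow the Nutch 2.x scripts; the -topN value and the urls seed directory are illustrative, and exact batch-id arguments can differ between versions):

bin/nutch inject urls             # seed the crawl DB once
bin/nutch generate -topN 500      # select at most 500 URLs for this batch
bin/nutch fetch -all              # fetch only the generated batch
bin/nutch parse -all              # parse what was fetched
bin/nutch updatedb                # write results back to storage (MySQL via Gora here)

Repeating this loop gives you the "fetch 500, store, fetch the next 500" behavior asked about, with -topN as the knob that caps each round.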
