One of my applications is integrated with a mainframe system through CICS / CTG (CICS Transaction Gateway). I am facing an error while executing a request; the request payload is ASN.1 encoded.
This is the error I get while executing the request:
com.ibm.connector2.cics.CICSUserInputException: CTG9627E IOException occurred when writing to the Output Record
org.springframework.dao.NonTransientDataAccessResourceException: Unable to create a connection to the remote application; nested exception is com.ibm.connector2.cics.CICSUserInputException:
CTG9627E IOException occurred when writing to the Output Record
com.ibm.connector2.cics.CICSUserInputException: CTG9627E IOException occurred when writing to the Output Record
at com.ibm.connector2.cics.ECIManagedConnection.call(Unknown Source)
at com.ibm.connector2.cics.ECIConnection.call(Unknown Source)
at com.ibm.connector2.cics.ECIInteraction.execute(Unknown Source)
java.io.IOException: messagelength in header greater than existing data length - common area too short?
at com.ibm.connector2.cics.ECIManagedConnection.call(Unknown Source)
at com.ibm.connector2.cics.ECIConnection.call(Unknown Source)
at com.ibm.connector2.cics.ECIInteraction.execute(Unknown Source)
I am using:
CICS version: c900-20160704-0205
Does anyone have any insights about this?
The error description is available at https://www.ibm.com/docs/en/cics-tg-multi/9.0?topic=SSZHFX_9.0.0/cclaj/CTG9627E.htm
It seems like the data you are passing is not an instance of javax.resource.cci.Streamable. Could you verify that?
Solved the issue with the below resolution.
The error says "messagelength in header greater than existing data length - common area too short?", i.e. the common area is too short for the message, so I increased the common area length, following this documentation: https://www.ibm.com/docs/en/cics-ts/5.6?topic=applications-transferring-data-between-programs-using-channels
I added the following in the CTG service executor >> CTG Record:
setCommonAreaLength(32500)
After applying this change the issue was resolved.
Hope this answer helps someone.
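For anyone calling the ECI resource adapter (CCI) directly rather than through a wrapper, the equivalent knob is the commarea length on the interaction spec. A minimal sketch under that assumption; the program name and the interaction/record variables are hypothetical placeholders:

// Minimal sketch, assuming the standard CICS ECI resource adapter (JCA/CCI) API.
// "MYPROG", interaction, inputRecord and outputRecord are hypothetical placeholders.
ECIInteractionSpec spec = new ECIInteractionSpec();
spec.setFunctionName("MYPROG");                                // target CICS program
spec.setInteractionVerb(ECIInteractionSpec.SYNC_SEND_RECEIVE); // synchronous call
spec.setCommareaLength(32500);                                 // commarea sized for the full ASN.1-encoded request
spec.setReplyLength(32500);                                    // expected reply size
interaction.execute(spec, inputRecord, outputRecord);          // records should implement javax.resource.cci.Streamable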
When attempting to connect to either the local DB or a SQL DB, I get the below error upon starting my server. Because the failure is in an OOTB (out-of-the-box) class, I haven't been able to debug it.
lnar-5cg84268sc 2020-01-21 15:07:38,381 ERROR Server.RunLevel ***** PolicyCenter unable to start *****
java.lang.NullPointerException
at gw.api.productmodel.ProductModelDisplayKey.getPath(ProductModelDisplayKey.java:41)
at com.guidewire.pc.api.productmodel.ProductModelObjectBase.verifyDisplayKeyNotEmpty(ProductModelObjectBase.java:647)
at com.guidewire.pc.api.productmodel.ProductModelObjectBase.verifyFields(ProductModelObjectBase.java:587)
at com.guidewire.pc.api.productmodel.AuditSchedulePatternInternal.verifyFields(AuditSchedulePatternInternal.java:187)
at com.guidewire.pc.api.productmodel.ProductModelObjectBase.verify(ProductModelObjectBase.java:523)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.verifyProductModel(ProductModelImpl.java:1685)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.verifyProductModel(ProductModelImpl.java:1640)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.verifyProductModelIfNeeded(ProductModelImpl.java:336)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.lambda$activateVerifyAndLockPatternsIfNeeded$0(ProductModelImpl.java:322)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl$$Lambda$325/406648867.accept(Unknown Source)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.lambda$runWithinTransaction$4(ProductModelImpl.java:2099)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl$$Lambda$326/1957698296.run(Unknown Source)
at com.guidewire.pl.system.transaction.BootstrapTransaction.run(BootstrapTransaction.java:44)
at com.guidewire.pl.system.transaction.TransactionManagerImpl.execute(TransactionManagerImpl.java:109)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.runWithinTransaction(ProductModelImpl.java:2098)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.activateVerifyAndLockPatternsIfNeeded(ProductModelImpl.java:316)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.start(ProductModelImpl.java:237)
at com.guidewire.pl.system.server.InitTab.startDependency(InitTab.java:465)
at com.guidewire.pc.system.server.PCInitTab.applicationEnterNoDaemons(PCInitTab.java:58)
at com.guidewire.pl.system.server.InitTab.enterNoDaemons(InitTab.java:875)
at com.guidewire.pl.system.server.InitTab.increaseRunLevelTo(InitTab.java:650)
at com.guidewire.pl.system.server.InitTab.setRunLevel(InitTab.java:380)
at com.guidewire.pl.system.servlet.GuidewireStartupServlet.init(GuidewireStartupServlet.java:88)
at javax.servlet.GenericServlet.init(GenericServlet.java:244)
at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:540)
at org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:349)
at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:812)
at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:288)
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1322)
at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:732)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:490)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:118)
at org.eclipse.jetty.server.Server.start(Server.java:342)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:100)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:60)
at org.eclipse.jetty.server.Server.doStart(Server.java:290)
at com.guidewire.commons.jetty.GWServerJettyServerMain$JettyServer.doStart(GWServerJettyServerMain.java:83)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:1250)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:1174)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.eclipse.jetty.start.Main.invokeMain(Main.java:509)
at org.eclipse.jetty.start.Main.start(Main.java:651)
at org.eclipse.jetty.start.Main.main(Main.java:99)
at com.guidewire.commons.jetty.GWServerJettyServerMain.main(GWServerJettyServerMain.java:69)
This error usually means an existing OOTB display key has been deleted or emptied. Go to Local History, revert your changes, and try to restart PolicyCenter; alternatively, review the same changes in git, revert them, and start the server again.
I'm using Hazelcast Jet to perform aggregations on streaming data. The problem is that the Hazelcast client shuts down unexpectedly.
I've implemented a simple pipeline with a remote map source, and the result is simply written to a sink.
// init pipeline
Pipeline p = Pipeline.create();
// configure source
BatchSource remoteBatchMap = Sources.remoteMap(<my remote map>, <my config>);
// add source and sink to pipeline
p.drawFrom(remoteBatchMap).drainTo(Sinks.map(SINK_MAP_NAME));
On the client side, the output is as expected for roughly the first 30 seconds. Then the shutdown happens, and from that point on the printed values freeze. That is logical, since the client has been shut down. But how do I prevent the shutdown?
2019-07-25 14:22:18,214 INFO com.betex.service.FixtureOddTotalSummaryImpl [SockJS-2] Number of sink elements vs original (BCK): 254/41254
2019-07-25 14:22:19,359 INFO com.betex.service.FixtureOddTotalSummaryImpl [SockJS-2] Number of sink elements vs original (BCK): 262/41254
2019-07-25 14:22:20,496 INFO com.betex.service.FixtureOddTotalSummaryImpl [SockJS-2] Number of sink elements vs original (BCK): 269/41259
2019-07-25 14:22:20,786 INFO com.hazelcast.logging.StandardLoggerFactory$StandardLogger [hz._hzInstance_1_jet.async.thread-8] betex0.7899090253375379 [app] [3.1] [3.12.1] HazelcastClient 3.12.1 (20190611 - 0a0ee66) is SHUTTING_DOWN
2019-07-25 14:22:20,791 INFO com.hazelcast.logging.StandardLoggerFactory$StandardLogger [hz._hzInstance_1_jet.async.thread-8] betex0.7899090253375379 [app] [3.1] [3.12.1] Removed connection to endpoint: [192.168.41.3]:5701, connection: ClientConnection{alive=false, connectionId=1, channel=NioChannel{/192.168.26.78:64217->/192.168.41.3:5701}, remoteEndpoint=[192.168.41.3]:5701, lastReadTime=2019-07-25 14:22:19.980, lastWriteTime=2019-07-25 14:22:19.855, closedTime=2019-07-25 14:22:20.789, connected server version=3.12.1}
2019-07-25 14:22:20,794 INFO com.hazelcast.logging.StandardLoggerFactory$StandardLogger [hz._hzInstance_1_jet.async.thread-8] betex0.7899090253375379 [app] [3.1] [3.12.1] Removed connection to endpoint: [192.168.41.4]:5701, connection: ClientConnection{alive=false, connectionId=2, channel=NioChannel{/192.168.26.78:64218->/192.168.41.4:5701}, remoteEndpoint=[192.168.41.4]:5701, lastReadTime=2019-07-25 14:22:20.525, lastWriteTime=2019-07-25 14:22:20.376, closedTime=2019-07-25 14:22:20.793, connected server version=3.12.1}
2019-07-25 14:22:20,797 INFO com.hazelcast.logging.StandardLoggerFactory$StandardLogger [hz._hzInstance_1_jet.async.thread-8] betex0.7899090253375379 [app] [3.1] [3.12.1] HazelcastClient 3.12.1 (20190611 - 0a0ee66) is SHUTDOWN
2019-07-25 14:22:20,802 INFO com.hazelcast.logging.StandardLoggerFactory$StandardLogger [hz._hzInstance_1_jet.async.thread-8] [192.168.1.66]:5701 [jet] [3.1] Execution of job '8dc4-d1e2-df66-a444', execution 9622-ba74-b907-150c completed in 42,335 ms
2019-07-25 14:22:21,635 INFO com.betex.service.FixtureOddTotalSummaryImpl [SockJS-2] Number of sink elements vs original (BCK): 41246/41259
2019-07-25 14:22:22,771 INFO com.betex.service.FixtureOddTotalSummaryImpl [SockJS-2] Number of sink elements vs original (BCK): 41246/41259
2019-07-25 14:22:23,909 INFO com.betex.service.FixtureOddTotalSummaryImpl [SockJS-2] Number of sink elements vs original (BCK): 41246/41259
On the server side it says that the connection was closed by the other side, i.e. by my client:
2019-07-25 14:22:21.909 INFO 21375 --- [hz.betex.IO.thread-in-2] com.hazelcast.nio.tcp.TcpIpConnection : [192.168.41.3]:5701 [app] [3.1] Connection[id=159, /192.168.41.3:5701->192.168.26.78/192.168.26.78:64217, qualifier=null, endpoint=[192.168.26.78]:64217, alive=false, type=JAVA_CLIENT] closed. Reason: Connection closed by the other side
2019-07-25 14:22:21.910 INFO 21375 --- [hz.betex.event-14] c.h.client.impl.ClientEndpointManager : [192.168.41.3]:5701 [app] [3.1] Destroying ClientEndpoint{connection=Connection[id=159, /192.168.41.3:5701->192.168.26.78/192.168.26.78:64217, qualifier=null, endpoint=[192.168.26.78]:64217, alive=false, type=JAVA_CLIENT], principal='ClientPrincipal{uuid='c5286586-cbe2-4c84-8e74-4c2f1f59310a', ownerUuid='ebce22c4-ed31-4ccf-9808-b19005dc55f8'}, ownerConnection=true, authenticated=true, clientVersion=3.12.1, creationTime=1564057300564, latest statistics=null}
I'd be very happy to get some orientation and ideas on where to look for the problem.
If you haven't already wrapped the code in a try/catch, I'd try that. I remember running into something similar but can't recall the root cause; it may have been a ClassCastException or something serialization-related. There wasn't any clue in the output but once I added the try/catch and dumped a stack trace the issue was obvious.
The cluster is independent of the clients. A Jet client can be used to submit a job and to monitor it, but if the client shuts down, the cluster isn't affected and the job continues to run.
You didn't share the code that creates and submits the job, but you probably shut down the client yourself, and that is what you need to fix.
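To illustrate the point: the client only needs to stay up while you submit the job (and, for a batch job, optionally wait for it). A minimal sketch of that flow, assuming the Jet 3.x client API; the member address below is just a placeholder:

// Minimal sketch assuming the Jet 3.x client API; the address is a placeholder.
ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().addAddress("192.168.1.66:5701"); // Jet member to connect to
JetInstance jet = Jet.newJetClient(clientConfig);                // client is only used to submit/monitor the job
Job job = jet.newJob(p);                                         // p is the Pipeline built above
job.join();                                                      // batch job: block until it completes
jet.shutdown();                                                  // then shut the client down explicitly

If you don't need to wait for the result, you can skip join() and shut the client down right after submission; the job keeps running on the cluster either way.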
I am using Spring XD. My stream looks like below, and I am running tests on 3 container nodes with 1 admin node, with rabbit as the transport.
aws-s3|processor1|http-client|processor2>queue:readyQueue
I have created the below taps:
tap1 aws-s3>s3Queue
tap2 processor1>processorQueue1
tap3 http-client>httpQueue
I run the below scenarios in my tests:
Scenario 1: 5 files of 200k = 1 million records
concurrency of http-client = 70 and processor2 = 30
I see 900k messages in s3Queue
I see 889k messages in processorQueue1
I see 886k messages in httpQueue
I see 883k messages in processorQueue2
Messages are lost everywhere, and it is random.
Scenario 2: 5 files of 200k = 1 million records, and all module concurrency = 1
I see 998800 messages in s3Queue
I see 998760 messages in processorQueue1
I see 997540 messages in httpQueue
I see 997530 messages in processorQueue2
Even these numbers are random and not consistent.
Scenario 3
I changed the stream as below, with concurrency = 1 and 5 files of 200k = 1 million records:
aws-s3 >testQueue
I get all my messages. I ran it 3 times with no issues; I get all 1 million messages every time.
Scenario 4
I changed the stream as below, with concurrency = 1 and 5 files of 200k = 1 million records:
aws-s3 |processor1 >testQueue2
I get all my messages. I ran it 3 times with no issues; I get all 1 million messages every time.
In scenarios 3 and 4 data ingestion is faster: it took about 5 minutes to process the files, and ingestion into the rabbit transport queue was fast, around 5k msg per sec.
In scenario 1 data ingestion was slower; even the aws-s3 module was pulling data very slowly, around 300 to 1000 msg per sec.
In scenario 2 aws-s3 pulled data faster, around 3-4k msg per sec, but http-client was slow, around 100 msg per sec.
I am thinking the XD threading is causing the issue and that is where I am losing messages. Please can you help me solve this?
Update
Scenario 5
I changed reply-timeout to -1 in http-client, and then I lost only 37 messages.
On the second iteration, however, I lost 25000 messages, and I see the below container log when that happened:
2016-03-04T03:42:04-0500 1.2.1.RELEASE ERROR task-scheduler-7 handler.LoggingHandler - org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.integration.amqp.outbound.AmqpOutboundEndpoint#b6700b1]; nested exception is org.springframework.amqp.AmqpIOException: java.io.IOException
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:84)
at org.springframework.xd.dirt.integration.rabbit.RabbitMessageBus$SendingHandler.handleMessageInternal(RabbitMessageBus.java:891)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:97)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:287)
at org.springframework.integration.channel.interceptor.WireTap.preSend(WireTap.java:129)
at org.springframework.integration.channel.AbstractMessageChannel$ChannelInterceptorList.preSend(AbstractMessageChannel.java:392)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:282)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:245)
at sun.reflect.GeneratedMethodAccessor204.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.integration.monitor.DirectChannelMetrics.monitorSend(DirectChannelMetrics.java:114)
at org.springframework.integration.monitor.DirectChannelMetrics.doInvoke(DirectChannelMetrics.java:98)
at org.springframework.integration.monitor.DirectChannelMetrics.invoke(DirectChannelMetrics.java:92)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
at com.sun.proxy.$Proxy1537.send(Unknown Source)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:95)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:231)
at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:154)
at org.springframework.integration.splitter.AbstractMessageSplitter.produceOutput(AbstractMessageSplitter.java:157)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:102)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:105)
Caused by: org.springframework.amqp.AmqpIOException: java.io.IOException
at org.springframework.amqp.rabbit.support.RabbitExceptionTranslator.convertRabbitAccessException(RabbitExceptionTranslator.java:63)
at org.springframework.amqp.rabbit.connection.SimpleConnection.createChannel(SimpleConnection.java:51)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.createBareChannel(CachingConnectionFactory.java:758)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.access$300(CachingConnectionFactory.java:747)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.doCreateBareChannel(CachingConnectionFactory.java:419)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createBareChannel(CachingConnectionFactory.java:395)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.getCachedChannelProxy(CachingConnectionFactory.java:364)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.getChannel(CachingConnectionFactory.java:357)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.access$1100(CachingConnectionFactory.java:75)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.createChannel(CachingConnectionFactory.java:763)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils$1.createChannel(ConnectionFactoryUtils.java:85)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.doGetTransactionalResourceHolder(ConnectionFactoryUtils.java:134)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.getTransactionalResourceHolder(ConnectionFactoryUtils.java:67)
at org.springframework.amqp.rabbit.core.RabbitTemplate.doExecute(RabbitTemplate.java:1035)
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:1028)
at org.springframework.amqp.rabbit.core.RabbitTemplate.send(RabbitTemplate.java:540)
at org.springframework.amqp.rabbit.core.RabbitTemplate.convertAndSend(RabbitTemplate.java:635)
at org.springframework.integration.amqp.outbound.AmqpOutboundEndpoint.send(AmqpOutboundEndpoint.java:331)
at org.springframework.integration.amqp.outbound.AmqpOutboundEndpoint.handleRequestMessage(AmqpOutboundEndpoint.java:323)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:99)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78)
... 93 more
Caused by: java.io.IOException
at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:106)
at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:102)
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:124)
at com.rabbitmq.client.impl.ChannelN.open(ChannelN.java:125)
at com.rabbitmq.client.impl.ChannelManager.createChannel(ChannelManager.java:134)
at com.rabbitmq.client.impl.AMQConnection.createChannel(AMQConnection.java:499)
at org.springframework.amqp.rabbit.connection.SimpleConnection.createChannel(SimpleConnection.java:44)
... 112 more
Caused by: com.rabbitmq.client.ShutdownSignalException: connection error
at com.rabbitmq.utility.ValueOrException.getValue(ValueOrException.java:67)
at com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(BlockingValueOrException.java:33)
at com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(AMQChannel.java:348)
at com.rabbitmq.client.impl.AMQChannel.privateRpc(AMQChannel.java:221)
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:118)
... 116 more
Caused by: com.rabbitmq.client.impl.UnknownChannelException: Unknown channel number 23364
at com.rabbitmq.client.impl.ChannelManager.getChannel(ChannelManager.java:80)
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:552)
... 1 more
2016-03-04T03:42:05-0500 1.2.1.RELEASE ERROR AMQP Connection xxx:5672 connection.CachingConnectionFactory - Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no queue 'xdbus.tap-s3.tap:stream:stream.batch-aws-s3-source.0' in vhost '/', class-id=50, method-id=20)
2016-03-04T03:53:13-0500 1.2.1.RELEASE ERROR AMQP Connection xxx:5672 connection.CachingConnectionFactory - Channel shutdown: connection error
2016-03-04T03:53:13-0500 1.2.1.RELEASE ERROR AMQP Connection xxx:5672 connection.CachingConnectionFactory - Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no queue 'xdbus.tap-s3.tap:stream:stream.batch-aws-s3-source.0' in vhost '/', class-id=50, method-id=20)
2016-03-04T02:57:54-0500 1.2.1.RELEASE ERROR AMQP Connection xxx:8080 connection.CachingConnectionFactory - Channel shutdown: connection error
2016-03-04T02:57:55-0500 1.2.1.RELEASE ERROR AMQP Connection xxx:8080 connection.CachingConnectionFactory - Channel shutdown: connection error
2016-03-04T03:42:04-0500 1.2.1.RELEASE ERROR AMQP Connection yyy:5672 connection.CachingConnectionFactory - Channel shutdown: connection error
Updated
I found the pattern behind the message loss: whenever the exception below happens, I see a lot of messages lost. I have tested this multiple times, and every time the exception occurs, messages are lost. Also, bumping up the concurrency makes the issue occur more often.
2016-03-05T13:59:41-0500 1.2.1.RELEASE ERROR AMQP Connection host1:5672 connection.CachingConnectionFactory - Channel shutdown: connection error
rabbit configuration
spring:
  rabbitmq:
    addresses: host1:5672,host2:5672,host3:5672
    adminAddresses: http://host1:15672,http://host2:15672,http://host3:15672
    nodes: rabbit#host1.test.com,rabbit#host2.test.com,rabbit#host2.test.com
    username: test
    password: test
    virtual_host: /
    useSSL: false
    sslProperties:
Updated after increasing the cache size to 200
I added the XML you provided and increased the cache size to 200. This is what happens when processing 1,080,000 messages, with http-client concurrency at 100 and everything else at 1: processing slowly stops, the messages are still sitting at the same count in the queue before http-client, and the message count in my named channel increases very slowly, around 10 messages per minute.
s3-poller|processor|http-client>queue:batchCacheQueue
The message count in the queue before http-client is not decreasing (186174), but messages are slowly trickling into batchCacheQueue.
Test case to simulate:
1) I was using a Spring Integration aws-s3 source with a splitter in a composite module | a processor doing XML parsing | http-client with concurrency 100 > named channel.
2) I think a file source might also work: create a single file of a million records and try to pull it from the file.
3) After some 4 to 5 runs we see this exception happening:
Caused by: com.rabbitmq.client.impl.UnknownChannelException: Unknown channel number 23364
We found an issue when channels are churned a lot; you need to increase the channel cache size in the rabbit caching connection factory.
See this answer for a work-around.
I opened a JIRA issue so that the next version of Spring XD will expose this setting in servers.yml, so you don't have to override the bus configuration file.
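For reference, the property being tuned is channelCacheSize on Spring AMQP's CachingConnectionFactory. In Spring XD it is set by overriding the rabbit message-bus configuration (or in servers.yml once the JIRA is implemented), but a minimal Java sketch shows the setting itself; the host name is a placeholder:

// Minimal sketch of the setting referred to above (Spring AMQP CachingConnectionFactory).
// In Spring XD this is normally configured in the rabbit bus XML, not in application code.
CachingConnectionFactory connectionFactory = new CachingConnectionFactory("host1"); // placeholder host
connectionFactory.setChannelCacheSize(200); // default is 25; raise it when channels are churned heavily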
I have been inserting data massively into a 2-node Cassandra cluster. After 2 days I found that the server went down with this error, and I can't figure out the cause:
FSReadError in /var/lib/cassandra/data/system/hints/system-hints-jb-1090-Data.db
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:95)
at org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:280)
at org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:41)
at org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1163)
at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.getNextBlock(IndexedSliceReader.java:362)
at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.fetchMoreData(IndexedSliceReader.java:332)
at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:145)
at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:45)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:294)
at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1468)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1294)
at org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:346)
at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:304)
at org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:92)
at org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:525)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.nio.channels.ClosedChannelException
at sun.nio.ch.FileChannelImpl.ensureOpen(Unknown Source)
at sun.nio.ch.FileChannelImpl.position(Unknown Source)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:101)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:87)
... 29 more
Thanks for the answer.
My hunch: you have a bad disk or your disk space ran out. You could confirm by running some disk check tools on your nodes?