How to increase YSQL timeout in YugabyteDB

[Question posted by a user on YugabyteDB Community Slack]
Running 2.7.1.1 with 3 masters and 5 tservers.
The documentation shows that ysql_client_read_write_timeout_ms overrides the client_read_write_timeout_ms gflag. The default for ysql_client_read_write_timeout_ms is -1, but I can't find what -1 means; "don't override the value", I am guessing. client_read_write_timeout_ms is 60000.
In any case, the tservers still show timeouts at 60s even if I change ysql_client_read_write_timeout_ms=300000 and restart the tservers.
A simple test:
ysqlsh -h x -Ux -d x -c"select count(*) from x.table;"
ERROR: Timed out: [Timed out (yb/rpc/outbound_call.cc:512): Read RPC (request call id 158) to x:9100 timed out after 59.996s, Timed out (yb/rpc/outbound_call.cc:512): Read RPC (request call id 160) to x:9100 timed out after 59.996s, Timed out (yb/rpc/outbound_call.cc:512): Read RPC (request call id 164) to x:9100 timed out after 59.996s, Timed out (yb/rpc/outbound_call.cc:512): Read RPC (request call id 165) to x:9100 timed out after 59.996s, Timed out (yb/rpc/outbound_call.cc:512): Read RPC (request call id 166) to x:9100 timed out after 59.996s]
Tested values on all tservers and masters using curl to verify:
curl -s http://x:9000/varz | grep client_read_write_timeout_ms
--client_read_write_timeout_ms=60000
--ysql_client_read_write_timeout_ms=300000

Looking at the code in C++:
const auto default_client_timeout_ms =
    (FLAGS_ysql_client_read_write_timeout_ms < 0
         ? std::max(FLAGS_client_read_write_timeout_ms, 600000)
         : FLAGS_ysql_client_read_write_timeout_ms);
So when ysql_client_read_write_timeout_ms is -1, the effective timeout is the maximum of --client_read_write_timeout_ms and 600000 ms. It also looks like that code path is only taken when the statement_timeout session variable is assigned; the common path is BuildSession, which uses the pg_yb_session_timeout_ms flag. Setting --pg_yb_session_timeout_ms=300000 on the yb-tservers and restarting them should make it work.
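For example (a sketch; the "..." stands in for your existing tserver flags, and the flag could equally go in the tserver flag file; values are in milliseconds):
yb-tserver ... --pg_yb_session_timeout_ms=300000
And because the code path above is only taken once statement_timeout is assigned, a per-session alternative from ysqlsh is the standard PostgreSQL setting:
SET statement_timeout = 300000;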

Related

PYODBC connection going to sleep mode

I am trying to execute a stored procedure from Databricks using a PYODBC connection; after all the transactions have happened, the status goes to sleeping mode. Please help me with that.
I tried all the possibilities (autocommit, connection timeout, etc.) but nothing is working.
import pyodbc
import datetime

username = "usrname"
password = "password"
server = "server"
database_name = "dbname"
port = "1433"
conn = pyodbc.connect('Driver={ODBC Driver 17 for SQL Server};SERVER=tcp:' + server + ',' + port + ';DATABASE=' + database_name + ';UID=' + username + ';PWD=' + password)
# conn.timeout = 600
cursor = conn.cursor()
# conn.autocommit = True
sql = "set nocount on; exec proc_name"
print("Execution started at " + str(datetime.datetime.now()))
cursor.execute(sql)
print("Execution finished at " + str(datetime.datetime.now()))
conn.commit()
cursor.close()
conn.close()
print(datetime.datetime.now())
The notebook is still in the Running state.
The status in the database for that SPID is initially RUNNABLE; once all the transactions in the proc complete (insertion, deletion, update of data), the status changes to sleeping.
Because of this sleeping status the Databricks notebook does not complete; it keeps on running. Please help me out with this issue. Thanks in advance.
Blocking caused by a sleeping SPID that has an uncommitted transaction:
This type of blocking can often be identified by a SPID that is sleeping or awaiting a command, yet whose transaction nesting level (@@TRANCOUNT, open_transaction_count from sys.dm_exec_requests) is greater than zero.
This can occur if the application experiences a query timeout, or issues a cancel without also issuing the required number of ROLLBACK and/or COMMIT statements. When a SPID receives a query timeout or a cancel, it will terminate the current query and batch, but does not automatically roll back or commit the transaction.
The application is responsible for this, as SQL Server cannot assume that an entire transaction must be rolled back due to a single query being canceled. The query timeout or cancel will appear as an ATTENTION signal event for the SPID in the Extended Event session.
Refer - https://learn.microsoft.com/en-us/troubleshoot/sql/performance/understand-resolve-blocking#detailed-blocking-scenarios
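In code, that means committing or rolling back explicitly on every path, including the timeout/cancel path. A minimal pyodbc sketch (server, database, and procedure names are placeholders, not taken from your environment):
import pyodbc

conn = pyodbc.connect('Driver={ODBC Driver 17 for SQL Server};SERVER=tcp:myserver,1433;DATABASE=mydb;UID=myuser;PWD=mypassword')
cursor = conn.cursor()
try:
    cursor.execute("set nocount on; exec proc_name")
    conn.commit()    # end the transaction so the SPID is not left sleeping with @@TRANCOUNT > 0
except pyodbc.Error:
    conn.rollback()  # on a query timeout or cancel, roll back instead of leaving the transaction open
    raise
finally:
    cursor.close()
    conn.close()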

Connecting Eclipse Hono to Ditto - "description":"Check if all required JSON fields were set."},"status":400}" Error

I was successfully able to connect Hono to Ditto using AMQP adapters and I got the following messages in the log. The value sent from the demo device registered in Hono is successfully received and updated in the Ditto thing.
connectivity_1_ad306c4c315b | 2019-07-08 21:12:05,434 INFO [ID:AMQP_NO_PREFIX:TelemetrySenderImpl-35] o.e.d.s.c.m.a.AmqpPublisherActor akka://ditto-cluster/system/sharding/connection/1/Insight-connection-1/pa/$a/c1/amqpPublisherActor2 - Response dropped, missing replyTo address: UnmodifiableExternalMessage [headers={orig_adapter=hono-http, device_id=4716, correlation-id=ID:AMQP_NO_PREFIX:TelemetrySenderImpl-35, content-type=application/vnd.eclipse.ditto+json, etag="hash:18694a24", orig_address=/telemetry, source=nginx:ditto}, response=true, error=false, authorizationContext=null, topicPath=ImmutableTopicPath [namespace=org.eclipse.ditto, id=4716, group=things, channel=twin, criterion=commands, action=modify, subject=null, path=org.eclipse.ditto/4716/things/twin/commands/modify], enforcement=null, headerMapping=null, sourceAddress=null, payloadType=TEXT, textPayload={"topic":"org.eclipse.ditto/4716/things/twin/commands/modify","headers":{"orig_adapter":"hono-http","device_id":"4716","correlation-id":"ID:AMQP_NO_PREFIX:TelemetrySenderImpl-35","content-type":"application/vnd.eclipse.ditto+json","etag":"\"hash:18694a24\"","orig_address":"/telemetry","source":"nginx:ditto"},"path":"/features","value":null,"status":204}, bytePayload=null']
things-search_1_8f2ad3dda4bf | 2019-07-08 21:12:05,593 INFO [] o.e.d.s.t.p.w.s.EnforcementFlow - Updating search index of <1> things
things-search_1_8f2ad3dda4bf | 2019-07-08 21:12:05,598 INFO [] o.e.d.s.t.p.w.s.EnforcementFlow - Got SudoRetrieveThingResponse <1> times
things-search_1_8f2ad3dda4bf | 2019-07-08 21:12:05,725 INFO [] a.s.Materializer akka.stream.Log(akka://ditto-cluster/user/thingsSearchRoot/searchUpdaterRoot/StreamSupervisor-21) - [SearchUpdaterStream/BulkWriteResult] Element: BulkWriteResult[matched=1,upserts=0,inserted=0,modified=1,deleted=0]
But when I tried to make a new connection (Hono installed on a different server, Ditto hosted on the same server as the successful connection above), the connection is established, but when I try to send messages from the demo devices registered in Hono to Ditto, I get the following response.
vigkam@srvgal89:~$ curl -X POST -i -u sensor0101@tenantAdapters:mylittle -H 'Content-Type: application/json' -d '{"temp": 23.09, "hum": 45.85}' http://srvgal89.deri.ie:8080/telemetry
HTTP/1.1 202 Accepted
content-length: 0
And when I try to retrieve connection metrics, I can see the metric count increase with the number of messages sent from Hono.
But the only problem is that the sensor values (temp and humidity, as in the curl command above) are not getting updated in the Ditto thing.
I got the below error message in the log, which says "description":"Check if all required JSON fields were set."},"status":400}"
connectivity_1_ad306c4c315b | 2019-07-08 21:34:17,640 INFO [ID:AMQP_NO_PREFIX:TelemetrySenderImpl-13] o.e.d.s.c.m.a.AmqpPublisherActor akka://ditto-cluster/system/sharding/connection/23/Gal-Connection-10/pa/$a/c1/amqpPublisherActor2 - Response dropped, missing replyTo address: UnmodifiableExternalMessage [headers={content-type=application/vnd.eclipse.ditto+json, orig_adapter=hono-http, orig_address=/telemetry, device_id=4816, correlation-id=ID:AMQP_NO_PREFIX:TelemetrySenderImpl-13}, response=true, error=true, authorizationContext=null, topicPath=ImmutableTopicPath [namespace=unknown, id=unknown, group=things, channel=twin, criterion=errors, action=null, subject=null, path=unknown/unknown/things/twin/errors], enforcement=null, headerMapping=null, sourceAddress=null, payloadType=TEXT, textPayload={"topic":"unknown/unknown/things/twin/errors","headers":{"content-type":"application/vnd.eclipse.ditto+json","orig_adapter":"hono-http","orig_address":"/telemetry","device_id":"4816","correlation-id":"ID:AMQP_NO_PREFIX:TelemetrySenderImpl-13"},"path":"/","value":{"status":400,"error":"json.field.missing","message":"JSON did not include required </path> field!","description":"Check if all required JSON fields were set."},"status":400}, bytePayload=null']
Please let me know if I am missing something. Thank you in advance!
More information:
The thingId in Ditto is org.eclipse.ditto:4816,
Tenant Id in Hono - tenantAdapters,
Device Registered in Hono - 4816 (tenantAdapters),
Auth Id of the device - sensor0101,
ConnectionId between Hono and Ditto - Gal-Connection-10
You are probably getting this failure because Ditto can't parse non-Ditto-Protocol messages. From reading your logs, I think your Ditto thing currently looks like this:
{
  "thingId": "org.eclipse.ditto:4716",
  "features": null
}
You could verify this by using a GET request to http://<your-ditto-address>:<your-ditto-gateway-port>/api/2/things/org.eclipse.ditto:4716.
Since you probably want to store the temperature and humidity in a feature of your thing, it would be best not to have features as null, but to already provide a feature with an ID for the value. Do this by creating a feature, e.g. with id 'environment', via a PUT to http://<your-ditto-address>:<your-ditto-gateway-port>/api/2/things/org.eclipse.ditto:4716/features/environment with content {} (see the example request after the JSON below). Afterwards your thing should look like this:
{
  "thingId": "org.eclipse.ditto:4716",
  "features": {
    "environment": {}
  }
}
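For reference, the feature-creating PUT described above could look like this (host, port, and credentials are placeholders):
curl -X PUT -u user:password -H 'Content-Type: application/json' -d '{}' http://<your-ditto-address>:<your-ditto-gateway-port>/api/2/things/org.eclipse.ditto:4716/features/environment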
Now back to your initial question: Ditto only understands Ditto Protocol messages and therefore doesn't know what to do with your plain JSON object.
To solve this problem you have two options:
1. Add a payload mapping script for incoming messages to your connection.
2. Publish a Ditto Protocol message instead of the plain JSON object. That would look something like this:
vigkam@srvgal89:~$ curl -X POST -i -u sensor0101@tenantAdapters:mylittle -H 'Content-Type: application/json' -d '{ "topic": "org.eclipse.ditto/4716/things/twin/commands/modify", "path": "/features/environment", "value": {"temp": 23.09, "hum": 45.85} }' http://srvgal89.deri.ie:8080/telemetry
Note that I have specified the path /features/environment which will update the value of the environment feature of your thing.
Messages processed by Eclipse Ditto via AMQP (e.g. from Hono) must be in the so-called Ditto Protocol, a JSON-based protocol which contains, among other fields, the path field that is missing from your JSON (hence the error message "JSON did not include required </path> field!").
So you have at least two options to proceed:
Instead of your JSON format {"temp": 23.09, "hum": 45.85}, send a message in Ditto Protocol; e.g. have a look here for an example.
Use the payload mapping feature of Ditto to specify a JavaScript function that is invoked on all incoming messages from Hono and transforms them into a valid Ditto Protocol message; a sketch follows.
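A sketch of such a mapping function, based on the mapToDittoProtocolMsg hook described in Ditto's payload-mapping documentation (the namespace, the device_id header, and the 'environment' feature below are assumptions taken from this thread; adjust them to your setup):
function mapToDittoProtocolMsg(headers, textPayload, bytePayload, contentType) {
    var payload = JSON.parse(textPayload);        // e.g. {"temp": 23.09, "hum": 45.85}
    return Ditto.buildDittoProtocolMsg(
        'org.eclipse.ditto',                      // namespace of the target thing
        headers['device_id'],                     // thing name, e.g. 4816
        'things', 'twin', 'commands', 'modify',   // group, channel, criterion, action
        '/features/environment/properties',       // path whose value should be replaced
        headers,                                  // passed through as Ditto headers
        payload                                   // the value to set at that path
    );
}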

spring xd losing messages when processing huge volume

I am using Spring XD. My stream looks like below, running tests on 3 container nodes with 1 admin node and Rabbit as the transport:
aws-s3|processor1|http-client|processor2>queue:readyQueue
I have created the taps below:
tap1 aws-s3>s3Queue
tap2 processor1>processorQueue1
tap3 http-client>httpQueue
I ran the scenarios below in my tests:
Scenario 1: 5 files of 200k = 1 million records
concurrency of http-client=70 and processor2=30
I see 900k messages in s3Queue
I see 889k messages in processorQueue1
I see 886k messages in httpQueue
I see 883k messages in processorQueue2
Messages are lost everywhere, and it is random.
Scenario 2: 5 files of 200k = 1 million records, and all module concurrency=1
I see 998800 messages in s3Queue
I see 998760 messages in processorQueue1
I see 997540 messages in httpQueue
I see 997530 messages in processorQueue2
Even these numbers are random and not consistent.
Scenario 3: I changed the stream as below, with concurrency=1 and 5 files of 200k = 1 million records
aws-s3 >testQueue
I get all my messages. I ran it 3 times with no issues; I get all my 1 million messages.
Scenario 4: I changed the stream as below, with concurrency=1 and 5 files of 200k = 1 million records
aws-s3 |processor1 >testQueue2
I get all my messages. I ran it 3 times with no issues; I get all my 1 million messages.
In scenarios 3 and 4, data ingestion was faster; processing took about 5 minutes, and ingestion into the Rabbit transport queue ran at around 5k msg/sec.
In scenario 1, data ingestion was slower, and even the s3 module was pulling data very slowly, around 300 to 1000 msg/sec.
In scenario 2, s3 pulled data fast, around 3-4k msg/sec, but the http client was slow, around 100 msg/sec.
I am thinking XD threading is causing issues and I am losing messages. Can you please help me solve this issue?
Update
Scenario 5
I changed reply-timeout to -1 in the http client, and then I lost only 37 msgs.
On the 2nd iteration, I lost 25000 msgs. I see the following container log when that happened:
2016-03-04T03:42:04-0500 1.2.1.RELEASE ERROR task-scheduler-7 handler.LoggingHandler - org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.integration.amqp.outbound.AmqpOutboundEndpoint@b6700b1]; nested exception is org.springframework.amqp.AmqpIOException: java.io.IOException
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:84)
at org.springframework.xd.dirt.integration.rabbit.RabbitMessageBus$SendingHandler.handleMessageInternal(RabbitMessageBus.java:891)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:97)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:287)
at org.springframework.integration.channel.interceptor.WireTap.preSend(WireTap.java:129)
at org.springframework.integration.channel.AbstractMessageChannel$ChannelInterceptorList.preSend(AbstractMessageChannel.java:392)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:282)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:245)
at sun.reflect.GeneratedMethodAccessor204.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.integration.monitor.DirectChannelMetrics.monitorSend(DirectChannelMetrics.java:114)
at org.springframework.integration.monitor.DirectChannelMetrics.doInvoke(DirectChannelMetrics.java:98)
at org.springframework.integration.monitor.DirectChannelMetrics.invoke(DirectChannelMetrics.java:92)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
at com.sun.proxy.$Proxy1537.send(Unknown Source)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:95)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:231)
at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:154)
at org.springframework.integration.splitter.AbstractMessageSplitter.produceOutput(AbstractMessageSplitter.java:157)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:102)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:105)
Caused by: org.springframework.amqp.AmqpIOException: java.io.IOException
at org.springframework.amqp.rabbit.support.RabbitExceptionTranslator.convertRabbitAccessException(RabbitExceptionTranslator.java:63)
at org.springframework.amqp.rabbit.connection.SimpleConnection.createChannel(SimpleConnection.java:51)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.createBareChannel(CachingConnectionFactory.java:758)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.access$300(CachingConnectionFactory.java:747)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.doCreateBareChannel(CachingConnectionFactory.java:419)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createBareChannel(CachingConnectionFactory.java:395)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.getCachedChannelProxy(CachingConnectionFactory.java:364)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.getChannel(CachingConnectionFactory.java:357)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.access$1100(CachingConnectionFactory.java:75)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$ChannelCachingConnectionProxy.createChannel(CachingConnectionFactory.java:763)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils$1.createChannel(ConnectionFactoryUtils.java:85)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.doGetTransactionalResourceHolder(ConnectionFactoryUtils.java:134)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.getTransactionalResourceHolder(ConnectionFactoryUtils.java:67)
at org.springframework.amqp.rabbit.core.RabbitTemplate.doExecute(RabbitTemplate.java:1035)
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:1028)
at org.springframework.amqp.rabbit.core.RabbitTemplate.send(RabbitTemplate.java:540)
at org.springframework.amqp.rabbit.core.RabbitTemplate.convertAndSend(RabbitTemplate.java:635)
at org.springframework.integration.amqp.outbound.AmqpOutboundEndpoint.send(AmqpOutboundEndpoint.java:331)
at org.springframework.integration.amqp.outbound.AmqpOutboundEndpoint.handleRequestMessage(AmqpOutboundEndpoint.java:323)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:99)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78)
... 93 more
Caused by: java.io.IOException
at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:106)
at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:102)
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:124)
at com.rabbitmq.client.impl.ChannelN.open(ChannelN.java:125)
at com.rabbitmq.client.impl.ChannelManager.createChannel(ChannelManager.java:134)
at com.rabbitmq.client.impl.AMQConnection.createChannel(AMQConnection.java:499)
at org.springframework.amqp.rabbit.connection.SimpleConnection.createChannel(SimpleConnection.java:44)
... 112 more
Caused by: com.rabbitmq.client.ShutdownSignalException: connection error
at com.rabbitmq.utility.ValueOrException.getValue(ValueOrException.java:67)
at com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(BlockingValueOrException.java:33)
at com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(AMQChannel.java:348)
at com.rabbitmq.client.impl.AMQChannel.privateRpc(AMQChannel.java:221)
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:118)
... 116 more
Caused by: com.rabbitmq.client.impl.UnknownChannelException: Unknown channel number 23364
at com.rabbitmq.client.impl.ChannelManager.getChannel(ChannelManager.java:80)
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:552)
... 1 more
2016-03-04T03:42:05-0500 1.2.1.RELEASE ERROR AMQP Connection xxx:5672 connection.CachingConnectionFactory - Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no queue 'xdbus.tap-s3.tap:stream:stream.batch-aws-s3-source.0' in vhost '/', class-id=50, method-id=20)
2016-03-04T03:53:13-0500 1.2.1.RELEASE ERROR AMQP Connection xxx:5672 connection.CachingConnectionFactory - Channel shutdown: connection error
2016-03-04T03:53:13-0500 1.2.1.RELEASE ERROR AMQP Connection xxx:5672 connection.CachingConnectionFactory - Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no queue 'xdbus.tap-s3.tap:stream:stream.batch-aws-s3-source.0' in vhost '/', class-id=50, method-id=20)
2016-03-04T02:57:54-0500 1.2.1.RELEASE ERROR AMQP Connection xxx:8080 connection.CachingConnectionFactory - Channel shutdown: connection error
2016-03-04T02:57:55-0500 1.2.1.RELEASE ERROR AMQP Connection xxx:8080 connection.CachingConnectionFactory - Channel shutdown: connection error
2016-03-04T03:42:04-0500 1.2.1.RELEASE ERROR AMQP Connection yyy:5672 connection.CachingConnectionFactory - Channel shutdown: connection error
Updated
I found the cause of the message loss: whenever this exception happens, I see a lot of messages lost. I tested this pattern multiple times; every time this exception occurs, messages are lost. Bumping up concurrency also makes the issue occur more often.
2016-03-05T13:59:41-0500 1.2.1.RELEASE ERROR AMQP Connection host1:5672 connection.CachingConnectionFactory - Channel shutdown: connection error
Rabbit configuration:
spring:
  rabbitmq:
    addresses: host1:5672,host2:5672,host3:5672
    adminAddresses: http://host1:15672,http://host2:15672,http://host3:15672
    nodes: rabbit@host1.test.com,rabbit@host2.test.com,rabbit@host3.test.com
    username: test
    password: test
    virtual_host: /
    useSSL: false
    sslProperties:
Updated after increasing the cache size to 200
I added the XML you provided and increased the cache size to 200. This is what happens when processing 1,080,000 messages. Only my http-client concurrency is 100; everything else is 1. Processing slowly stops: messages are still sitting in the queue before http-client, at the same count, while the message count in my named channel increases very slowly, around 10 msg per minute.
s3-poller|processor|http-client>queue:batchCacheQueue
Messages are not decreasing in the queue before http-client (stuck at 186174), but messages are slowly arriving in batchCacheQueue.
Test case to simulate:
1) I was using the Spring Integration aws-s3 source with a splitter in a composite module | a processor doing XML parsing | http-client with concurrency 100 > named channel.
2) I think a file source might also work: create a single file of a million records and try to pull it from the file.
3) After some 4 to 5 runs we see this exception happening:
Caused by: com.rabbitmq.client.impl.UnknownChannelException: Unknown channel number 23364
We found an issue when channels are churned a lot; you need to increase the channel cache size in the Rabbit caching connection factory.
See this answer for a work-around.
I opened a JIRA issue so that the next version of Spring XD will expose this setting in servers.yml, so you don't have to override the bus configuration file.
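The work-around amounts to overriding the bus's connection factory and raising its channel cache size. A sketch of the relevant Spring bean (the bean id, the property placeholder, and the value 200 are illustrative; channelCacheSize is the documented property on Spring AMQP's CachingConnectionFactory):
<bean id="rabbitConnectionFactory"
      class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
    <property name="addresses" value="${spring.rabbitmq.addresses}"/>
    <property name="channelCacheSize" value="200"/>
</bean>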

jtds TDS Protocol error: Invalid packet type

I'm connecting to Sybase ASA v11.0.1 using the jTDS library (v1.2.6), and I get the following error every time I try to return varchar data:
Protocol error: Invalid packet type 0x0
(or 0x4 or 0x7)
The queries work fine when I return a timestamp or numeric value. Any idea what is causing this error or how to resolve it?
It seems Sybase is not supported by 1.2.6, according to this thread: https://sourceforge.net/p/jtds/discussion/104389/thread/d6e2efe3/
But I also see this error sometimes when a timeout closes the connection while the result set is being read.

Could not connect to localhost:9160 with phpcassa

I'm having the following problem: phpcassa throws this exception when the load on the script increases to 200 queries per second:
Error connecting to localhost:9160: TException: TSocket: Could not connect to localhost:9160 (Cannot assign requested address [99])
Error connecting to localhost:9160: TException: TSocket: Could not connect to localhost:9160 (Cannot assign requested address [99])
PHP Fatal error: Uncaught exception 'NoServerAvailable' with message 'An attempt was made to connect to every server twice, but all attempts failed. The last error was: TException:TSocket: Could not connect to localhost:9160 (Cannot assign requested address [99])' in /var/www/megaumnik/context/connection.php:232
Stack trace:
#0 /var/www/megaumnik/context/connection.php(257): ConnectionPool->make_conn()
#1 /var/www/megaumnik/context/connection.php(351): ConnectionPool->get()
#2 /var/www/megaumnik/context/connection.php(286): ConnectionPool->call('describe_keyspa...', 'thegame')
#3 /var/www/megaumnik/context/columnfamily.php(194): ConnectionPool->describe_keyspace()
#4 /var/www/megaumnik/data/getData.class.php(265): ColumnFamily->__construct(Object(ConnectionPool), 'username')
#5 /var/www/megaumnik/data/test.php(6): getData->getDataByKey('username', '317')
#6 {main}
thrown in /var/www/megaumnik/context/connection.php on line 232
The script has 4 $cf->get() calls on different column families; each column family has 1000 rows.
It sounds like you may be hitting the open file limit. You can see what the current limit is with 'ulimit -a'.
To increase the limit, you can set a new one in one of two ways. First, you can do something like 'ulimit -n 10000', which is temporary and will only affect processes started from that shell. To permanently increase the limit, you need to add a line to /etc/security/limits.conf that looks like this:
* - nofile 10000
For this to take effect, I believe you need to log in again.
