Could not connect to localhost:9160 with phpcassa - cassandra

I'm having the following problem: phpcassa throws this exception when the load increases to 200 requests per second to the script:
Error connecting to localhost:9160: TException: TSocket: Could not connect to localhost:9160 (Cannot assign requested address [99])
Error connecting to localhost:9160: TException: TSocket: Could not connect to localhost:9160 (Cannot assign requested address [99])
PHP Fatal error: Uncaught exception 'NoServerAvailable' with message 'An attempt was made to connect to every server twice, but all attempts failed. The last error was: TException:TSocket: Could not connect to localhost:9160 (Cannot assign requested address [99])' in /var/www/megaumnik/context/connection.php:232
Stack trace:
#0 /var/www/megaumnik/context/connection.php(257): ConnectionPool->make_conn()
#1 /var/www/megaumnik/context/connection.php(351): ConnectionPool->get()
#2 /var/www/megaumnik/context/connection.php(286): ConnectionPool->call('describe_keyspa...', 'thegame')
#3 /var/www/megaumnik/context/columnfamily.php(194): ConnectionPool->describe_keyspace()
#4 /var/www/megaumnik/data/getData.class.php(265): ColumnFamily->__construct(Object(ConnectionPool), 'username')
#5 /var/www/megaumnik/data/test.php(6): getData->getDataByKey('username', '317')
#6 {main}
thrown in /var/www/megaumnik/context/connection.php on line 232
The script makes four $cf->get() calls against different column families. Each column family has 1000 rows.

It sounds like you may be hitting the open file limit. You can see what the current limit is with 'ulimit -a'.
To increase the limit, you can set a new one in one of two ways. First, you can run something like 'ulimit -n 10000', which is temporary and will only affect processes started by that shell. To permanently increase the limit, you need to add a line to /etc/security/limits.conf that looks like this:
* - nofile 10000
For this to take effect, I believe you need to log in again.

Related

What does the name between [] printed by the org.apache.commons.logging log interface mean in a Grails app?

I am trying to debug a problem in a Grails application and I see this in the log:
[http-nio-8180-exec-19] ERROR org.hibernate.internal.SessionImpl - HHH000346: Error during managed flush [Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1]
I want to know what http-nio-8180-exec-19 stands for and whether I can use this thread name to track which user performed the operation that led to the exception.
Can I assume that thread http-nio-8180-exec-19 will show only the operations performed by a single user, and that each user who logs into the application will have a different thread associated with them?
By default I believe those are the names of threads, though you could put whatever you want there. If you have something like [%15.15t] in your logback.groovy, that is what is causing the thread name to be inserted there.
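If it helps, a pattern along these lines in grails-app/conf/logback.groovy is what inserts the thread name between the brackets (a minimal sketch; the appender name and the rest of the pattern are illustrative):
import ch.qos.logback.classic.encoder.PatternLayoutEncoder
import ch.qos.logback.core.ConsoleAppender
import static ch.qos.logback.classic.Level.INFO

// %15.15t prints the current thread name (e.g. http-nio-8180-exec-19),
// padded or truncated to 15 characters
appender('STDOUT', ConsoleAppender) {
    encoder(PatternLayoutEncoder) {
        pattern = '[%15.15t] %-5level %logger{36} - %msg%n'
    }
}
root(INFO, ['STDOUT'])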

ORA-01555: snapshot too old: rollback segment number with name “” too small (SonarQube)

I am getting an error while publishing results to SonarQube.
Error querying database. Cause: org.apache.ibatis.executor.result.ResultMapException: Error attempting to get column 'RAWLINEHASHES' from result set. Cause: java.sql.SQLException: ORA-01555: snapshot too old: rollback segment number 2 with name "_SYSSMU2_111974964$" too small
Cause: org.apache.ibatis.executor.result.ResultMapException: Error attempting to get column 'RAWLINEHASHES' from result set. Cause: java.sql.SQLException: ORA-01555: snapshot too old: rollback segment number 2 with name "_SYSSMU2_111974964$" too small
The pipeline ran for 2 hours 30 minutes.
Can you please help?
The error you are getting, ORA-01555, is an Oracle error message.
Your pipeline is executing something against an Oracle database which, after it has run for a long time, raises this error.
For ways to avoid this error, see: https://blog.enmotech.com/2018/09/10/ora-01555-snapshot-old-error-ways-to-avoid-ora-01555-snapshot-too-old-error/
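As one common mitigation (a sketch only, assuming automatic undo management and DBA privileges; the retention value is just an example), you can give long-running queries a larger consistent-read window by raising the undo retention:
-- show the current undo retention, in seconds (SQL*Plus / SQLcl)
SHOW PARAMETER undo_retention
-- raise it, for example to 4 hours, so long-running reads keep their snapshot
ALTER SYSTEM SET UNDO_RETENTION = 14400;
Also make sure the undo tablespace is large enough (or can autoextend) to honour the new retention.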

Logstash 6.2 - full persistent queue (wrong mapping?)

My queue is almost full and I see these errors in my log file:
[2018-05-16T00:01:33,334][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"2018.05.15-el-mg_papi-prod", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x608d85c1>], :response=>{"index"=>{"_index"=>"2018.05.15-el-mg_papi-prod", "_type"=>"doc", "_id"=>"mHvSZWMB8oeeM9BTo0V2", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [papi_request_json.query.disableFacets]", "caused_by"=>{"type"=>"i_o_exception", "reason"=>"Current token (VALUE_TRUE) not numeric, can not use numeric value accessors\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper#56b8442f; line: 1, column: 555]"}}}}}
[2018-05-16T00:01:37,145][INFO ][org.logstash.beats.BeatsHandler] [local: 0:0:0:0:0:0:0:1:5000, remote: 0:0:0:0:0:0:0:1:50222] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
[2018-05-16T00:01:37,147][INFO ][org.logstash.beats.BeatsHandler] [local: 0:0:0:0:0:0:0:1:5000, remote: 0:0:0:0:0:0:0:1:50222] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 84
...
[2018-05-16T15:28:09,981][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-05-16T15:28:09,982][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-05-16T15:28:09,982][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
If I understand the first warning correctly, the problem is with the mapping. I have a lot of files in my Logstash queue folder. My questions are:
How do I empty my queue? Can I just delete all the files from the Logstash queue folder (all logs would be lost) and then resend all the data to Logstash to the proper index?
How can I determine where exactly the mapping problem is, or which servers are sending data of the wrong type?
I have a pipeline on port 5000 named testing-pipeline that is only used by Nagios to check whether Logstash is active. What are those [INFO ][org.logstash.beats.BeatsHandler] logs?
If I understand correctly, are the [INFO ][logstash.outputs.elasticsearch] entries just logs about retrying to process the Logstash queue?
All servers run Filebeat 6.2.2. Thank you for your help.
All pages in the queue could be deleted, but that is not the proper solution. In my case the queue was full because there were events whose mapping differed from the index mapping. In Elasticsearch 6 you cannot send documents with different mappings to the same index, so the logs stacked up in the queue (even if there is only one wrong event, all the others will not be processed). So how do you process all the data that can be processed and skip the wrong events? The solution is to configure a DLQ (dead letter queue). Every event with response code 400 or 404 is moved to the DLQ so the others can be processed. The data from the DLQ can be processed later with a separate pipeline, as sketched below.
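A minimal sketch of that setup, assuming Logstash 6.x with default paths (the file name, path and index name below are illustrative): enable the DLQ in logstash.yml, then read it back with the dead_letter_queue input in a separate pipeline:
# logstash.yml
dead_letter_queue.enable: true

# dlq-reprocess.conf - separate pipeline that re-reads rejected events
input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue"  # <path.data>/dead_letter_queue
    commit_offsets => true
    pipeline_id => "main"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "dlq-reprocessed-%{+YYYY.MM.dd}"
  }
}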
The wrong mapping can be identified from the error log: "error"=>{"type"=>"mapper_parsing_exception", ... }. To pinpoint the exact place with the wrong mapping, you have to compare the mapping of the events with the mapping of the indices.
The [INFO ][org.logstash.beats.BeatsHandler] messages were caused by the Nagios server. The check did not consist of a valid Beats request, which is why you see the "Handling exception" entries. The check should only test whether the Logstash service is active, so now I check the Logstash service on localhost:9600 instead, for more info here.
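For that check, a plain HTTP request against the Logstash monitoring API on port 9600 is enough, for example:
curl -s http://localhost:9600/?pretty
which returns basic node info (host, version, http_address) as JSON while Logstash is running.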
The [INFO ][logstash.outputs.elasticsearch] messages mean that Logstash is trying to process the queue but the index is locked ([FORBIDDEN/12/index read-only / allow delete (api)]) because the indices were set to a read-only state. When there is not enough free space on the server, Elasticsearch automatically switches indices to read-only. This threshold can be changed with cluster.routing.allocation.disk.watermark.low, for more info here.
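Once disk space has been freed, the read-only block has to be cleared by hand. A sketch, assuming Elasticsearch on localhost:9200 (adjust the watermark value to your environment):
# remove the read-only block from all indices
curl -XPUT 'http://localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'
# optionally raise the low disk watermark mentioned above
curl -XPUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{"transient": {"cluster.routing.allocation.disk.watermark.low": "90%"}}'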

phpcassa creating column family

I get a very strange error when creating a column family with phpcassa. Here is my code:
$sys = new SystemManager("127.0.0.1:9160");
$attr = array("comparator" => "UTF8Type");
$data = $sys->create_column_family("my_key_space", "user_likes", $attr);
I'm not actually sure if this is valid code, but I'm fairly sure it is. This is the error I get:
TTransportException [ 0 ]: TSocket: timed out reading 4 bytes from 127.0.0.1:9160
I get this error after a really long wait, maybe 30-60 seconds, but any other code, such as retrieving or inserting data, works perfectly. What could it be?
I believe the attribute name should be "comparator_type" instead of "comparator".
As for why the server isn't responding, you'll probably find an Exception or stack trace in your Cassandra logs. If you're using an up-to-date version of Cassandra (like 1.1.5 or 1.1.6), I suggest opening a ticket in the Cassandra JIRA, because it should be returning an error instead of timing out.
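For example, the snippet from the question with just the attribute name changed:
$sys = new SystemManager("127.0.0.1:9160");
// "comparator_type" is the key phpcassa's SystemManager expects here
$attr = array("comparator_type" => "UTF8Type");
$data = $sys->create_column_family("my_key_space", "user_likes", $attr);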

jtds TDS Protocol error: Invalid packet type

I'm connecting to Sybase ASA v11.0.1 using the jTDS library (v1.2.6) and I get the following error every time I try to return varchar data:
Protocol error: Invalid packet type 0x0
(or 0x4 or 0x7)
The queries work fine when I return a timestamp or numeric value. Any idea what is causing this error or how to resolve it?
It seems Sybase is not supported by jTDS 1.2.6, according to this thread: https://sourceforge.net/p/jtds/discussion/104389/thread/d6e2efe3/
However, I also see this error sometimes when a timeout closes the connection while I am reading the result set.
