Packet for query is too large. Setting max_allowed_packet - Linux

When I connect to my database via phpMyAdmin everything works fine and it shows me the data.
When I am on my server I can see all the data and work with it.
But when I try to connect to my server I'm getting this error:
com.mysql.cj.jdbc.exceptions.PacketTooBigException: Packet for query is too large (4.739.923 > 65.535). You can change this value on the server by setting the 'max_allowed_packet' variable.
I'm reading the max allowed packet size here:
Database changed
MariaDB [selforder]> SELECT @@max_allowed_packet;
+----------------------+
| @@max_allowed_packet |
+----------------------+
|            536870912 |
+----------------------+
1 row in set (0.00 sec)
and I changed my file /root/.my.cnf (opened with nano) to this:
[mysqld]
max_allowed_packet=536870912
but I still get this error.

Not really an answer, but my problem was that I had installed multiple versions of MySQL; I completely reinstalled my system.
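A hedged footnote on the question itself: the 65.535 in the JDBC error is the max_allowed_packet of the server the application actually reaches, while the server queried above reports 536870912, and that mismatch fits the multiple-installations explanation. Also, /root/.my.cnf is normally a per-user client configuration and may not be read by the server at all; a [mysqld] setting belongs in the server configuration (e.g. /etc/my.cnf or a file under /etc/mysql/, paths vary by distribution), or the value can be raised at runtime:
SET GLOBAL max_allowed_packet = 536870912;
SHOW VARIABLES LIKE 'max_allowed_packet';
After editing the server configuration, restart the server (e.g. systemctl restart mariadb) and reconnect; existing connections keep the old value.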

Related

graylog2 not showing any data

I'm new to Graylog2. I'm using it to analyze logs stored in Elasticsearch.
I have done the setup successfully using this link: http://www.richardyau.com/?p=377
But I indexed the logs into Elasticsearch under the index name "xg-*", and I'm not sure why the same has not shown up in Graylog2.
When I check the indices status in the Graylog2 web interface, it shows only the "graylog2_0" index, not my index.
Can someone please tell me the reason behind this?
Elasticsearch indices details:
[root@xg bin]# curl http://localhost:9200/_cat/indices?pretty
green open graylog2_0 4 0 0 0 576b 576b
yellow open xg-2015.12.12 5 1 56 0 335.4kb 335.4kb
[root@xg bin]#
Graylog2 Web indices details:
Graylog doesn't support indexing schemes other than its own. If you want to use Graylog to analyze your data, you also have to ingest it through Graylog.
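For completeness, a hedged sketch of what "ingest it through Graylog" can look like, assuming a GELF HTTP input has been created in Graylog on port 12201 (the input type and port are assumptions, not part of the setup above):
curl -XPOST http://localhost:12201/gelf -p0 -d '{"short_message":"log line from xg", "host":"xg", "facility":"xg-import"}'
Messages ingested this way land in Graylog's own indices (like graylog2_0 above) and show up in the web interface.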

cassandra cqlsh <computed_ttl> error

I'm learning Cassandra CQL using the CQL 3.1 documentation manual, on a Mac with Cassandra installed from Homebrew (cqlsh 4.0.0 | Cassandra 2.0.0 | CQL spec 3.1.0 | Thrift protocol 19.37.0). From cqlsh, when I enter collections map example number 7:
UPDATE users USING TTL <computed_ttl> SET todo['2012-10-1'] = 'find water' WHERE user_id = 'frodo';
I'm getting this error:
Bad Request: line 1:22 no viable alternative at input '<'
So, are the docs wrong, or am I doing something wrong?
You need to replace <computed_ttl> with an actual TTL value; angle brackets in the docs mark placeholders, they are not literal CQL. E.g.
UPDATE users USING TTL 100 SET todo['2012-10-1'] = 'find water' WHERE user_id = 'frodo';
which would cause the value to expire after 100 seconds.

RPC timeout error while exporting data from CQL

I am trying to export data from Cassandra using the CQL client. A column family has about 100,000 rows in it. When I copy data into a CSV file using the COPY TO command I get the following rpc_timeout error:
copy mycolfamily to '/root/mycolfamily.csv'
Request did not complete within rpc_timeout.
I am running:
[cqlsh 3.1.6 | Cassandra 1.2.8 | CQL spec 3.0.0 | Thrift protocol 19.36.0]
How can I increase the RPC timeout limit?
I tried adding rpc_timeout_in_ms: 20000 (the default is 10000) in my conf/cassandra.yaml file, but while restarting Cassandra I get:
[root@user ~]# null; Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config; exception=Cannot create property=rpc_timeout_in_ms for JavaBean=org.apache.cassandra.config.Config@71bfc4fc; Unable to find property 'rpc_timeout_in_ms' on class: org.apache.cassandra.config.Config
Invalid yaml; unable to start server. See log for stacktrace.
The COPY command currently does the same thing as a SELECT with LIMIT 99999999. So it will eventually time out as your data grows. Here's the export function:
https://github.com/apache/cassandra/blob/trunk/bin/cqlsh#L1524
I'm doing the same export in production. What I'm doing is the following (see the sketch after the pipe example below):
make a SELECT * FROM table WHERE timeuuid > someTimeuuid LIMIT 10000
write the result set to a CSV file in append (>>) mode
make the next SELECTs with respect to the last timeuuid seen
You can pipe a command into cqlsh like this:
echo "{$cql}" | /usr/bin/cqlsh -u user -p password localhost 9160 > file.csv
You can use auto pagination by specifying a fetch size in the DataStax Java driver:
import com.datastax.driver.core.*;

Statement stmt = new SimpleStatement("SELECT id FROM mycolfamily;");
stmt.setFetchSize(500);
ResultSet result = session.execute(stmt);
// iterate the ResultSet directly; result.all() would buffer every row in memory
for (Row r : result) {
    //write to file
}
I encountered the same problem a few minutes ago; then I found CAPTURE and it worked:
First start capturing in cqlsh, then run your query with a limit of your choice.
http://www.datastax.com/documentation/cql/3.0/cql/cql_reference/capture_r.html
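A minimal sketch of that flow (the file path and LIMIT are arbitrary, and CAPTURE writes cqlsh's formatted output rather than strict CSV):
cqlsh> CAPTURE '/root/mycolfamily.out'
cqlsh> SELECT * FROM mycolfamily LIMIT 100000;
cqlsh> CAPTURE OFF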
The best way to export the data is using the nodetool snapshot option. This returns immediately and the snapshot can be restored later on. The only issue is that this export is per node, so to cover the entire cluster you have to run it on every node.
Example:
nodetool -h localhost -p 7199 snapshot
See reference:
http://docs.datastax.com/en/archived/cassandra/1.1/docs/backup_restore.html#taking-a-snapshot

How to make MariaDB 10 make full use of a multi-core processor?

I'm using MariaDB 10; the official documentation only mentions the 5.x versions. I tried using thread_pool_min_threads, but it didn't work at all: the server just shut down and left an "Unknown variable" message in the event log.
You did not mention your platform. thread_pool_min_threads does not exist in Unix variants (as also pointed out in the official documentation); it is a Windows-only variable.
If I list the threadpool-related variables on 10.0.3, on Windows, I get:
mysql> show variables like 'thread_po%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| thread_pool_max_threads | 500   |
| thread_pool_min_threads | 1     |
+-------------------------+-------+
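As a hedged pointer for Unix platforms (not covered by the output above): there the pool is sized with thread_pool_size, which defaults to the number of CPUs, so an out-of-the-box MariaDB 10 should already use all cores. A sketch of an explicit my.cnf setting:
[mysqld]
thread_handling=pool-of-threads
# number of thread groups; typically left at the default (= CPU count)
thread_pool_size=8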

IP to ASN mapping algorithm

Is there no easy way to map a given IP address to the corresponding ASN? For example:
ping to find out the IP address:
$ ping www.switch.ch
PING aslan.switch.ch (130.59.108.36) 56(84) bytes of data.
whois lookup for the ASN number:
$ whois -h whois.cymru.com -v 130.59.108.36
Warning: RIPE flags used with a traditional server.
AS | IP | BGP Prefix | CC | Registry | Allocated | AS Name
559 | 130.59.108.36 | 130.59.0.0/16 | CH | ripencc | 1993-09-22 | SWITCH SWITCH, Swiss Education and Research Network
So the mapping in this case would be 130.59.108.36 (IP) -> 559 (ASN). Easy. But what if I would like to create my own local mapping service with the publicly available information from the Regional Internet Registries? So, for the above example, it would be this list, right?
ftp://ftp.ripe.net/pub/stats/ripencc/delegated-ripencc-latest
And finding the matching entry is also not a problem:
ripencc|CH|ipv4|130.59.0.0|65536|19930922|assigned
But how do I get the ASN from the line above? How are those two pieces of information linked together?
ripencc|EU|asn|559|1|19930901|allocated
Thanks in advance for a reply!
I explain how to do this here: https://www.quaxio.com/bgp/ (formerly at https://alokmenghrajani.github.io/bgp/)
It basically involves downloading a dump from a router and then using an efficient data representation to map an IP address to a netmask.
I'd propose doing this based on MRT dumps collected from an actual BGP speaker.
There is, for example, a Python library that can be used to easily parse MRT dumps: http://code.google.com/p/pyasn/
If you're not able to run your own BGP speaker, you can download dumps at http://archive.routeviews.org/
Make sure you check out their other material too. They also provide DNS zone files that would enable you to do such lookups using a standard DNS server such as BIND or NSD: http://archive.routeviews.org/dnszones/
I hope that gets you started...
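To make the pyasn suggestion concrete, a minimal sketch using the current pyasn package from PyPI (the data file name is hypothetical; it is built beforehand from a RouteViews MRT RIB dump with pyasn's conversion utilities):
import pyasn

# 'ipasn.dat' is a hypothetical pre-built prefix-to-ASN database file
asndb = pyasn.pyasn('ipasn.dat')
asn, prefix = asndb.lookup('130.59.108.36')
print(asn, prefix)  # expected output along the lines of: 559 130.59.0.0/16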
I have made a tool that appends ASNs to HTTP log lines. I explain how to build the database from RIPE raw data and use it with binary search. Also, the C code is ready to use: 1.6M lookups in a few seconds on a regular virtual instance:
https://github.com/psvz/tirexASN
