We are facing a problem: we have a cluster of 5 nodes, and after a restart DSE keeps trying to start without success. The last record in system.log is below.
We tried heap sizes of 48 GB and 64 GB; the node has 128 GB of RAM. Three of the nodes started, but these two cannot; there is no error in the log, just that record.
INFO [main] 2017-05-16 21:16:27,507 CassandraDaemon.java:487 - JVM Arguments: [-Ddse.server_process, -XX:+AlwaysPreTouch, -Dcassandra.disable_auth_caches_remote_configuration=false, -Dcassandra.force_default_indexing_page_size=false, -Dcassandra.join_ring=true, -Dcassandra.load_ring_state=true, -Dcassandra.write_survey=false, -XX:CMSInitiatingOccupancyFraction=75, -XX:CMSWaitDuration=10000, -ea, -XX:G1RSetUpdatingPauseTimePercent=5, -XX:+HeapDumpOnOutOfMemoryError, -Xms16G, -Djava.net.preferIPv4Stack=true, -XX:MaxGCPauseMillis=500, -Xmx16G, -XX:MaxTenuringThreshold=1, -Xss256k, -XX:+PerfDisableSharedMem, -XX:+ResizeTLAB, -XX:StringTableSize=1000003, -XX:SurvivorRatio=8, -XX:ThreadPriorityPolicy=42, -XX:+UseThreadPriorities, -XX:+UseTLAB, -XX:+UseG1GC, -Dcom.sun.management.jmxremote.authenticate=false, -Dcassandra.jmx.local.port=7199, -XX:CompileCommandFile=/etc/dse/cassandra/hotspot_compiler, -javaagent:/usr/share/dse/cassandra/lib/jamm-0.3.0.jar, -Djava.library.path=/usr/share/dse/hadoop2-client/lib/native:/usr/share/dse/cassandra/lib/sigar-bin:/usr/share/dse/hadoop2-client/lib/native:/usr/share/dse/cassandra/lib/sigar-bin:, -Dguice_include_stack_traces=OFF, -Ddse.system_memory_in_mb=128658, -Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader, -Dguice_include_stack_traces=OFF, -Ddse.system_memory_in_mb=128658, -Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader, -Dlogback.configurationFile=logback.xml, -Dcassandra.logdir=/var/log/cassandra, -Dcassandra.storagedir=/usr/share/dse/data, -Dcassandra-pidfile=/var/run/dse/dse.pid, -Dgraph-enabled=true, -XX:HeapDumpPath=/var/lib/cassandra/java_1494958565.hprof, -XX:ErrorFile=/var/lib/cassandra/hs_err_1494958565.log, -Dguice_include_stack_traces=OFF, -Ddse.system_memory_in_mb=128658, -Dcassandra.config.loader=com.datastax.bdp.config
I'm using the spark-cloudant library 1.6.3, which is installed by default with the Spark service.
I'm trying to save some data to Cloudant:
val df = getTopXRecommendationsForAllUsers().toDF.filter( $"_1" > 6035)
println(s"Saving ${df.count()} ratings to Cloudant: " + new Date())
println(df.show(5))
val timestamp: Long = System.currentTimeMillis / 1000
val dbName: String = s"${destDB.database}_${timestamp}"
df.write.mode("append").json(s"${dbName}.json")
val dfWriter = df.write.format("com.cloudant.spark")
dfWriter.option("cloudant.host", destDB.host)
if (destDB.username.isDefined && destDB.username.get.nonEmpty) dfWriter.option("cloudant.username", destDB.username.get)
if (destDB.password.isDefined && destDB.password.get.nonEmpty) dfWriter.option("cloudant.password", destDB.password.get)
dfWriter.save(dbName)
However, I hit the error:
Starting getTopXRecommendationsForAllUsers: Sat Dec 24 08:50:11 CST 2016
Finished getTopXRecommendationsForAllUsers: Sat Dec 24 08:50:11 CST 2016
Saving 6 ratings to Cloudant: Sat Dec 24 08:50:17 CST 2016
+----+--------------------+
| _1| _2|
+----+--------------------+
|6036|[[6036,2503,4.395...|
|6037|[[6037,572,4.5785...|
|6038|[[6038,1696,4.894...|
|6039|[[6039,572,4.6854...|
|6040|[[6040,670,4.6820...|
+----+--------------------+
only showing top 5 rows
()
Use connectorVersion=1.6.3, dbName=recommendationdb_1482591017, indexName=null, viewName=null,jsonstore.rdd.partitions=5, + jsonstore.rdd.maxInPartition=-1,jsonstore.rdd.minInPartition=10, jsonstore.rdd.requestTimeout=900000,bulkSize=20, schemaSampleSize=1
Name: org.apache.spark.SparkException
Message: Job aborted due to stage failure: Task 2 in stage 642.0 failed 10 times, most recent failure: Lost task 2.9 in stage 642.0 (TID 409, yp-spark-dal09-env5-0049): java.lang.RuntimeException: Database recommendationdb_1482591017: nothing was saved because the number of records was 0!
at com.cloudant.spark.common.JsonStoreDataAccess.saveAll(JsonStoreDataAccess.scala:187)
I know there is data because I also save it to files:
! cat recommendationdb_1482591017.json/*
{"_1":6036,"_2":[{"user":6036,"product":2503,"rating":4.3957030284620355},{"user":6036,"product":2019,"rating":4.351395783537379},{"user":6036,"product":1178,"rating":4.3373212302468165},{"user":6036,"product":923,"rating":4.3328207761734605},{"user":6036,"product":922,"rating":4.320787353937724},{"user":6036,"product":750,"rating":4.307312349612301},{"user":6036,"product":53,"rating":4.304341611330176},{"user":6036,"product":858,"rating":4.297961629128419},{"user":6036,"product":1212,"rating":4.285360675560061},{"user":6036,"product":1423,"rating":4.275255129149407}]}
{"_1":6037,"_2":[{"user":6037,"product":572,"rating":4.578508339835482},{"user":6037,"product":858,"rating":4.247809350206506},{"user":6037,"product":904,"rating":4.1222486445799404},{"user":6037,"product":527,"rating":4.117342524702621},{"user":6037,"product":787,"rating":4.115781026855997},{"user":6037,"product":2503,"rating":4.109861422105844},{"user":6037,"product":1193,"rating":4.088453520710152},{"user":6037,"product":912,"rating":4.085139017248665},{"user":6037,"product":1221,"rating":4.084368219857013},{"user":6037,"product":1207,"rating":4.082536396283374}]}
{"_1":6038,"_2":[{"user":6038,"product":1696,"rating":4.894442132848873},{"user":6038,"product":2998,"rating":4.887752985607918},{"user":6038,"product":2562,"rating":4.740442462948304},{"user":6038,"product":3245,"rating":4.7366090605162094},{"user":6038,"product":2609,"rating":4.736125582066063},{"user":6038,"product":1669,"rating":4.678373819044571},{"user":6038,"product":572,"rating":4.606132758047402},{"user":6038,"product":1493,"rating":4.577140478430046},{"user":6038,"product":745,"rating":4.56568047928448},{"user":6038,"product":213,"rating":4.546054686400765}]}
{"_1":6039,"_2":[{"user":6039,"product":572,"rating":4.685425482619273},{"user":6039,"product":527,"rating":4.291256016077275},{"user":6039,"product":904,"rating":4.27766400846558},{"user":6039,"product":2019,"rating":4.273486883864949},{"user":6039,"product":2905,"rating":4.266371181044469},{"user":6039,"product":912,"rating":4.26006044096224},{"user":6039,"product":1207,"rating":4.259935289367192},{"user":6039,"product":2503,"rating":4.250370780277651},{"user":6039,"product":1148,"rating":4.247288578998062},{"user":6039,"product":745,"rating":4.223697008637559}]}
{"_1":6040,"_2":[{"user":6040,"product":670,"rating":4.682008703927743},{"user":6040,"product":3134,"rating":4.603656534071515},{"user":6040,"product":2503,"rating":4.571906881428182},{"user":6040,"product":3415,"rating":4.523567737705732},{"user":6040,"product":3808,"rating":4.516778146579665},{"user":6040,"product":3245,"rating":4.496176019230939},{"user":6040,"product":53,"rating":4.491020821805015},{"user":6040,"product":668,"rating":4.471757243976877},{"user":6040,"product":3030,"rating":4.464674231353673},{"user":6040,"product":923,"rating":4.446195112198678}]}
{"_1":6042,"_2":[{"user":6042,"product":3389,"rating":3.331488167984286},{"user":6042,"product":572,"rating":3.3312810949271903},{"user":6042,"product":231,"rating":3.2622287749148926},{"user":6042,"product":1439,"rating":3.0988533259613944},{"user":6042,"product":333,"rating":3.0859809743588706},{"user":6042,"product":404,"rating":3.0573976830913203},{"user":6042,"product":216,"rating":3.044620107397873},{"user":6042,"product":408,"rating":3.038302525994588},{"user":6042,"product":2411,"rating":3.0190834747311244},{"user":6042,"product":875,"rating":2.9860048032439095}]}
This is a defect in spark-cloudant 1.6.3 that is fixed in 1.6.4. The pull request is https://github.com/cloudant-labs/spark-cloudant/pull/61
The answer is to upgrade to spark-cloudant 1.6.4. See this answer if you are trying to do that on the IBM Bluemix Spark Service: Spark-cloudant package 1.6.4 loaded by %AddJar does not get used by notebook
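Outside of Bluemix, one way to pick up the newer connector is to pass it to spark-submit. This is a sketch: the spark-packages coordinate and Scala suffix below are assumptions (check the spark-cloudant releases for the correct version string), and `my_app.jar` is a placeholder for your application jar.

```shell
# Load spark-cloudant 1.6.4 instead of the bundled 1.6.3
# (coordinate and Scala suffix assumed -- verify against the releases page).
spark-submit \
  --packages cloudant-labs:spark-cloudant:1.6.4-s_2.10 \
  my_app.jar
```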
Pretty much what it says: quiesce the node, stop Cassandra, upgrade the Cassandra RPMs from 2.2.5 to 3.0.4, and then start Cassandra. When it comes back up:
INFO 13:02:50 Detected version upgrade from 2.2.5 to 3.0.4, snapshotting system keyspace
INFO 13:02:50 Updating topology for all endpoints that have changed
Exception (java.lang.RuntimeException) encountered during startup: org.codehaus.jackson.JsonParseException: Unexpected character ('K' (code 75)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
 at [Source: java.io.StringReader#27be81e5; line: 1, column: 2]
java.lang.RuntimeException: org.codehaus.jackson.JsonParseException: Unexpected character ('K' (code 75)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
 at [Source: java.io.StringReader#27be81e5; line: 1, column: 2]
at org.apache.cassandra.utils.FBUtilities.fromJsonMap(FBUtilities.java:561)
at org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableParams(LegacySchemaMigrator.java:381)
at org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableMetadata(LegacySchemaMigrator.java:363)
at org.apache.cassandra.schema.LegacySchemaMigrator.readTableMetadata(LegacySchemaMigrator.java:273)
at org.apache.cassandra.schema.LegacySchemaMigrator.readTable(LegacySchemaMigrator.java:244)
at org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readTables$233(LegacySchemaMigrator.java:237)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at org.apache.cassandra.schema.LegacySchemaMigrator.readTables(LegacySchemaMigrator.java:237)
at org.apache.cassandra.schema.LegacySchemaMigrator.readKeyspace(LegacySchemaMigrator.java:186)
at org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readSchema$230(LegacySchemaMigrator.java:177)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at org.apache.cassandra.schema.LegacySchemaMigrator.readSchema(LegacySchemaMigrator.java:177)
at org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:77)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:223)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679)
And the node dies. I'm stumped.
Fixed: with the node stopped, delete everything under datadir/system*/ and let the node rebuild it on restart.
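The step above can be sketched as follows. This is a minimal sketch, assuming the default data directory layout; `data_dir` should be whatever `data_file_directories` in cassandra.yaml points to, the node must be stopped first, and the directories are moved aside rather than deleted so they can be restored if the rebuild goes wrong.

```python
# Sketch: move every system* keyspace directory aside so Cassandra
# rebuilds the system keyspaces on the next start.
# Run only while the node is stopped; paths are assumptions, not
# a definitive procedure.
import shutil
from pathlib import Path

def move_system_keyspaces(data_dir):
    """Rename each system* directory to a .bak sibling; return the backups."""
    moved = []
    # sorted() materializes the glob up front, so renames inside the loop
    # cannot feed back into the iteration.
    for ks in sorted(Path(data_dir).glob("system*")):
        if ks.is_dir() and not ks.name.endswith(".bak"):
            backup = ks.with_name(ks.name + ".bak")
            shutil.move(str(ks), str(backup))
            moved.append(backup.name)
    return moved
```

After restarting, the node snapshots nothing here itself, so keep the `.bak` directories until it has rejoined the cluster cleanly.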
I am unable to connect my Erlang application to Cassandra with ErlCassa. I am getting the following error message:
11> {ok, Cl} = erlcassa_client:connect("0.0.0.0", 9160).
** exception error: no case clause matching {'EXIT',{undef,[{thrift_client_util,new,
["0.0.0.0",9160,cassandra_thrift,[{framed,true}]],
[]},
{erlcassa_client,connect,2,
[{file,"src/erlcassa_client.erl"},{line,41}]},
{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,573}]},
{erl_eval,expr,5,[{file,"erl_eval.erl"},{line,364}]},
{shell,exprs,7,[{file,"shell.erl"},{line,674}]},
{shell,eval_exprs,7,[{file,"shell.erl"},{line,629}]},
{shell,eval_loop,3,[{file,"shell.erl"},{line,614}]}]}}
in function erlcassa_client:connect/2 (src/erlcassa_client.erl, line 41)
10> {ok, Cl} = erlcassa_client:connect("localhost", 9160).
** exception error: no case clause matching {'EXIT',{undef,[{thrift_client_util,new,
["localhost",9160,cassandra_thrift,[{framed,true}]],
[]},
{erlcassa_client,connect,2,
[{file,"src/erlcassa_client.erl"},{line,41}]},
{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,573}]},
{erl_eval,expr,5,[{file,"erl_eval.erl"},{line,364}]},
{shell,exprs,7,[{file,"shell.erl"},{line,674}]},
{shell,eval_exprs,7,[{file,"shell.erl"},{line,629}]},
{shell,eval_loop,3,[{file,"shell.erl"},{line,614}]}]}}
in function erlcassa_client:connect/2 (src/erlcassa_client.erl, line 41)
Erlang version:
Erlang R16B02 (erts-5.10.3) [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]
Cassandra version:
INFO 12:59:51,051 Cassandra version: 1.1.12
INFO 12:59:51,051 Thrift API version: 19.33.0
INFO 12:59:51,053 CQL supported versions: 2.0.0,3.0.0-beta1 (default: 2.0.0)
I think you need to add the https://github.com/interline/erlang-thrift dependency to your project. The `undef` in the exception shows that erlcassa calls `thrift_client_util:new` from that dependency, and the call fails because the dependency has not been compiled with the project.
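If the project uses rebar, the dependency can be declared roughly like this (a sketch; the `".*"` version regex and `master` branch are assumptions -- pin whatever ref erlcassa actually expects):

```erlang
%% rebar.config (fragment)
{deps, [
    {thrift, ".*",
     {git, "https://github.com/interline/erlang-thrift.git", {branch, "master"}}}
]}.
```

Then run `rebar get-deps compile` so that `thrift_client_util` is built alongside the project before retrying `erlcassa_client:connect/2`.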
While studying the pycassa API, I downloaded the sample project Twissandra.
I configured it with Cassandra, and after logging in, when I add a tweet the following error occurs:
Environment:
Request Method: POST
Request URL: http://127.0.0.1:8000/
Django Version: 1.3.1
Python Version: 2.7.2
Installed Applications:
['django.contrib.sessions', 'tweets', 'users']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'users.middleware.UserMiddleware')
Traceback:
File "C:\Python27\lib\site-packages\django\core\handlers\base.py" in get_response
111. response = callback(request, *callback_args, **callback_kwargs)
File "C:\Users\Muhammad Umair\workspace\Twissandra\src\Twissandra\tweets\views.py" in timeline
20. 'body': form.cleaned_data['body'],
File "C:\Users\Muhammad Umair\workspace\Twissandra\src\Twissandra\cass.py" in save_tweet
216. USERLINE.insert(str(username), {ts: str(tweet_id)})
216. USERLINE.insert(str(username), {ts: str(tweet_id)})
File "C:\Python27\lib\site-packages\pycassa-1.3.0-py2.7.egg\pycassa\columnfamily.py" in insert
860. colval = self._pack_value(columns.values()[0], colname)
File "C:\Python27\lib\site-packages\pycassa-1.3.0-py2.7.egg\pycassa\columnfamily.py" in _pack_value
428. return packer(value)
File "C:\Python27\lib\site-packages\pycassa-1.3.0-py2.7.egg\pycassa\marshal.py" in pack_uuid
202. randomize=True)
File "C:\Python27\lib\site-packages\pycassa-1.3.0-py2.7.egg\pycassa\util.py" in convert_time_to_uuid
66. 'neither a UUID, a datetime, or a number')
Exception Type: ValueError at /
Exception Value: Argument for a v1 UUID column name or value was neither a UUID, a datetime, or a number
Did you modify the Cassandra column families or create them yourself? Maybe you're using an old version of Twissandra?
This particular stacktrace shows that pycassa is expecting a UUID for a column value, but in recent versions of Twissandra, the column values are all BytesType (untyped).
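To make the failure concrete, here is a simplified stand-in for the check inside `pycassa.util.convert_time_to_uuid` (not the real implementation -- the real function derives the v1 UUID from the timestamp, while this sketch only mirrors the accepted argument types and the error from the traceback):

```python
# Sketch of pycassa's TimeUUID argument check: a UUID, a datetime, or a
# number is accepted; anything else (e.g. the str(tweet_id) passed by
# save_tweet above) raises the ValueError from the traceback.
import time
import uuid
from datetime import datetime

def convert_time_to_uuid(time_arg):
    """Simplified stand-in for pycassa.util.convert_time_to_uuid."""
    if isinstance(time_arg, uuid.UUID):
        return time_arg
    if isinstance(time_arg, datetime):
        time_arg = time.mktime(time_arg.timetuple())
    if not isinstance(time_arg, (int, float)):
        raise ValueError('Argument for a v1 UUID column name or value '
                         'was neither a UUID, a datetime, or a number')
    # Real pycassa builds the v1 UUID from time_arg; uuid1() stands in here.
    return uuid.uuid1()

convert_time_to_uuid(uuid.uuid1())   # OK: already a UUID
convert_time_to_uuid(time.time())    # OK: a number
# convert_time_to_uuid("42")         # raises ValueError -- the bug above
```

So the fix is either to pass the UUID object itself (not `str(tweet_id)`) for a TimeUUIDType column, or to recreate the column families so their value types match the current Twissandra schema.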