How to configure and run Reaper to repair Cassandra in Linux (CentOS environment) - cassandra

I'm trying to install and run Reaper 1.4 on my CentOS VM. I followed the same installation steps as in this video (https://www.youtube.com/watch?v=0dub29BgwPI), but still have had no success getting Reaper started. Can anyone please help me with a proper/complete document? I have, however, read and followed
http://cassandra-reaper.io/docs/download/
Below are my cassandra-reaper.yaml settings:
segmentCountPerNode: 16
repairParallelism: DATACENTER_AWARE
repairIntensity: 0.9
scheduleDaysBetween: 7
repairRunThreadCount: 15
hangingRepairTimeoutMins: 30
storageType: cassandra
enableCrossOrigin: true
incrementalRepair: false
blacklistTwcsTables: false
enableDynamicSeedList: true
repairManagerSchedulingIntervalSeconds: 10
activateQueryLogger: false
jmxConnectionTimeoutInSeconds: 5
useAddressTranslator: false
# purgeRecordsAfterInDays: 30
# numberOfRunsToKeepPerUnit: 10
jmxPorts:
  #127.0.0.1: 7100
  #10.X.X.X: 7199
  #127.0.0.2: 7200
  #127.0.0.3: 7300
  #127.0.0.4: 7400
  #127.0.0.5: 7500
  #127.0.0.6: 7600
  #127.0.0.7: 7700
  #127.0.0.8: 7800
jmxAuth:
  username: *****
  password: *****
server:
  type: default
  applicationConnectors:
    - type: http
      port: 8080
      bindHost: 0.0.0.0
  adminConnectors:
    - type: http
      port: 8081
      bindHost: 0.0.0.0
  requestLog:
    appenders: []
cassandra:
  clusterName: "dc1"
  contactPoints: ["10.X.X.1","10.X.X.2","10.X.X.3","10.X.X.4","10.X.X.5"]
  #contactPoints: ["127.0.0.1"]
  keyspace: "reaper_db"
  loadBalancingPolicy:
    type: tokenAware
    shuffleReplicas: true
    subPolicy:
      type: dcAwareRoundRobin
      localDC:
      usedHostsPerRemoteDC: 0
      allowRemoteDCsForLocalConsistencyLevel: false
  authProvider:
    type: plainText
    username: cass
    password: cass
  ssl:
    type: jdk
autoScheduling:
  enabled: false
  initialDelayPeriod: PT15S
  periodBetweenPolls: PT10M
  timeBeforeFirstSchedule: PT5M
  scheduleSpreadPeriod: PT6H
  excludedKeyspaces:
    - keyspace1
    - keyspace2
accessControl:
  sessionTimeout: PT10M
  shiro:
    iniConfigs: ["classpath:shiro.ini"]
Log from /var/log/cassandra-reaper/reaper.log:
INFO [main] i.c.ReaperApplication - initializing runner thread pool with 15 threads
INFO [main] i.c.ReaperApplication - initializing storage of type: cassandra
INFO [main] c.d.d.core - DataStax Java driver 3.5.0 for Apache Cassandra
INFO [main] c.d.d.c.GuavaCompatibility - Detected Guava >= 19 in the classpath, using modern compatibility layer
INFO [main] c.d.d.c.ClockFactory - Using native clock to generate timestamps.
INFO [main] c.d.d.c.NettyUtil - Found Netty's native epoll transport in the classpath, using it
INFO [main] o.a.s.c.ReflectionBuilder - An instance with name 'authc' already exists. Redefining this object as a new instance of type org.apache.shiro.web.filter.authc.PassThruAuthenticationFilter
Log from /var/log/cassandra-reaper.err:
at org.yaml.snakeyaml.scanner.ScannerImpl.fetchMoreTokens(ScannerImpl.java:415)
at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:226)
at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingValue.produce(ParserImpl.java:586)
at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:158)
at org.yaml.snakeyaml.parser.ParserImpl.getEvent(ParserImpl.java:168)
at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:347)
... 11 more
ls: cannot access server/target/cassandra-reaper-*.jar: No such file or directory
io.dropwizard.configuration.ConfigurationParsingException: /etc/cassandra-reaper/cassandra-reaper.yaml has an error:
* Malformed YAML at line: 27, column: 11; while scanning for the next token; found character '\t' that cannot start any token; in 'reader', line 27, column 1:
clusterName: "dc1"
^
at [Source: (ByteArrayInputStream); line: 26, column: 10]
at io.dropwizard.configuration.ConfigurationParsingException$Builder.build(ConfigurationParsingException.java:279)
at io.dropwizard.configuration.BaseConfigurationFactory.build(BaseConfigurationFactory.java:96)
at io.dropwizard.cli.ConfiguredCommand.parseConfiguration(ConfiguredCommand.java:126)
at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:74)
at io.dropwizard.cli.Cli.run(Cli.java:78)
at io.dropwizard.Application.run(Application.java:93)
at io.cassandrareaper.ReaperApplication.main(ReaperApplication.java:99)
Caused by: com.fasterxml.jackson.dataformat.yaml.snakeyaml.error.MarkedYAMLException: while scanning for the next token; found character '\t' that cannot start any token; in 'reader', line 27, column 1:
clusterName: "dc1"
^

Malformed YAML at line: 27, column: 11; while scanning for the next token; found character '\t' that cannot start any token; in 'reader', line 27, column 1:
clusterName: "dc1"
You need to remove any tab characters in your YAML file and replace them with four spaces instead.
See the answer here for why this is common when editing YAML files:
A YAML file cannot contain tabs as indentation
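As a quick way to apply that fix and verify the result, here is a minimal Python sketch (it assumes PyYAML is installed and that replacing each tab with four spaces preserves the nesting you intended):

import yaml  # PyYAML, only used to verify the result parses

path = "/etc/cassandra-reaper/cassandra-reaper.yaml"
with open(path) as f:
    text = f.read()

# YAML indentation must be spaces; swap every tab for four spaces
fixed = text.replace("\t", "    ")

yaml.safe_load(fixed)  # raises yaml.YAMLError if the file is still malformed

with open(path, "w") as f:
    f.write(fixed)

After the file parses cleanly, restart the cassandra-reaper service and check reaper.log again.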

Related

DSE cassandra not starting

Faced with a problem: we have a cluster of 5 nodes, and after a restart DSE tries to start without success. The last record in system.log is below.
Tried with heap at 48 and 64; the node has 128GB. Three of them started but these two cannot, with no error in the log, just that record.
INFO [main] 2017-05-16 21:16:27,507 CassandraDaemon.java:487 - JVM Arguments: [-Ddse.server_process, -XX:+AlwaysPreTouch, -Dcassandra.disable_auth_caches_remote_configuration=false, -Dcassandra.force_default_indexing_page_size=false, -Dcassandra.join_ring=true, -Dcassandra.load_ring_state=true, -Dcassandra.write_survey=false, -XX:CMSInitiatingOccupancyFraction=75, -XX:CMSWaitDuration=10000, -ea, -XX:G1RSetUpdatingPauseTimePercent=5, -XX:+HeapDumpOnOutOfMemoryError, -Xms16G, -Djava.net.preferIPv4Stack=true, -XX:MaxGCPauseMillis=500, -Xmx16G, -XX:MaxTenuringThreshold=1, -Xss256k, -XX:+PerfDisableSharedMem, -XX:+ResizeTLAB, -XX:StringTableSize=1000003, -XX:SurvivorRatio=8, -XX:ThreadPriorityPolicy=42, -XX:+UseThreadPriorities, -XX:+UseTLAB, -XX:+UseG1GC, -Dcom.sun.management.jmxremote.authenticate=false, -Dcassandra.jmx.local.port=7199, -XX:CompileCommandFile=/etc/dse/cassandra/hotspot_compiler, -javaagent:/usr/share/dse/cassandra/lib/jamm-0.3.0.jar, -Djava.library.path=/usr/share/dse/hadoop2-client/lib/native:/usr/share/dse/cassandra/lib/sigar-bin:/usr/share/dse/hadoop2-client/lib/native:/usr/share/dse/cassandra/lib/sigar-bin:, -Dguice_include_stack_traces=OFF, -Ddse.system_memory_in_mb=128658, -Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader, -Dguice_include_stack_traces=OFF, -Ddse.system_memory_in_mb=128658, -Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader, -Dlogback.configurationFile=logback.xml, -Dcassandra.logdir=/var/log/cassandra, -Dcassandra.storagedir=/usr/share/dse/data, -Dcassandra-pidfile=/var/run/dse/dse.pid, -Dgraph-enabled=true, -XX:HeapDumpPath=/var/lib/cassandra/java_1494958565.hprof, -XX:ErrorFile=/var/lib/cassandra/hs_err_1494958565.log, -Dguice_include_stack_traces=OFF, -Ddse.system_memory_in_mb=128658, -Dcassandra.config.loader=com.datastax.bdp.config

Spark Cloudant error: 'nothing was saved because the number of records was 0!'

I'm using the spark-cloudant library 1.6.3 that is installed by default with the spark service.
I'm trying to save some data to Cloudant:
val df = getTopXRecommendationsForAllUsers().toDF.filter( $"_1" > 6035)
println(s"Saving ${df.count()} ratings to Cloudant: " + new Date())
println(df.show(5))
val timestamp: Long = System.currentTimeMillis / 1000
val dbName: String = s"${destDB.database}_${timestamp}"
df.write.mode("append").json(s"${dbName}.json")
val dfWriter = df.write.format("com.cloudant.spark")
dfWriter.option("cloudant.host", destDB.host)
if (destDB.username.isDefined && destDB.username.get.nonEmpty) dfWriter.option("cloudant.username", destDB.username.get)
if (destDB.password.isDefined && destDB.password.get.nonEmpty) dfWriter.option("cloudant.password", destDB.password.get)
dfWriter.save(dbName)
However, I hit the error:
Starting getTopXRecommendationsForAllUsers: Sat Dec 24 08:50:11 CST 2016
Finished getTopXRecommendationsForAllUsers: Sat Dec 24 08:50:11 CST 2016
Saving 6 ratings to Cloudant: Sat Dec 24 08:50:17 CST 2016
+----+--------------------+
| _1| _2|
+----+--------------------+
|6036|[[6036,2503,4.395...|
|6037|[[6037,572,4.5785...|
|6038|[[6038,1696,4.894...|
|6039|[[6039,572,4.6854...|
|6040|[[6040,670,4.6820...|
+----+--------------------+
only showing top 5 rows
()
Use connectorVersion=1.6.3, dbName=recommendationdb_1482591017, indexName=null, viewName=null,jsonstore.rdd.partitions=5, + jsonstore.rdd.maxInPartition=-1,jsonstore.rdd.minInPartition=10, jsonstore.rdd.requestTimeout=900000,bulkSize=20, schemaSampleSize=1
Name: org.apache.spark.SparkException
Message: Job aborted due to stage failure: Task 2 in stage 642.0 failed 10 times, most recent failure: Lost task 2.9 in stage 642.0 (TID 409, yp-spark-dal09-env5-0049): java.lang.RuntimeException: Database recommendationdb_1482591017: nothing was saved because the number of records was 0!
at com.cloudant.spark.common.JsonStoreDataAccess.saveAll(JsonStoreDataAccess.scala:187)
I know there is data because I also save it to files:
! cat recommendationdb_1482591017.json/*
{"_1":6036,"_2":[{"user":6036,"product":2503,"rating":4.3957030284620355},{"user":6036,"product":2019,"rating":4.351395783537379},{"user":6036,"product":1178,"rating":4.3373212302468165},{"user":6036,"product":923,"rating":4.3328207761734605},{"user":6036,"product":922,"rating":4.320787353937724},{"user":6036,"product":750,"rating":4.307312349612301},{"user":6036,"product":53,"rating":4.304341611330176},{"user":6036,"product":858,"rating":4.297961629128419},{"user":6036,"product":1212,"rating":4.285360675560061},{"user":6036,"product":1423,"rating":4.275255129149407}]}
{"_1":6037,"_2":[{"user":6037,"product":572,"rating":4.578508339835482},{"user":6037,"product":858,"rating":4.247809350206506},{"user":6037,"product":904,"rating":4.1222486445799404},{"user":6037,"product":527,"rating":4.117342524702621},{"user":6037,"product":787,"rating":4.115781026855997},{"user":6037,"product":2503,"rating":4.109861422105844},{"user":6037,"product":1193,"rating":4.088453520710152},{"user":6037,"product":912,"rating":4.085139017248665},{"user":6037,"product":1221,"rating":4.084368219857013},{"user":6037,"product":1207,"rating":4.082536396283374}]}
{"_1":6038,"_2":[{"user":6038,"product":1696,"rating":4.894442132848873},{"user":6038,"product":2998,"rating":4.887752985607918},{"user":6038,"product":2562,"rating":4.740442462948304},{"user":6038,"product":3245,"rating":4.7366090605162094},{"user":6038,"product":2609,"rating":4.736125582066063},{"user":6038,"product":1669,"rating":4.678373819044571},{"user":6038,"product":572,"rating":4.606132758047402},{"user":6038,"product":1493,"rating":4.577140478430046},{"user":6038,"product":745,"rating":4.56568047928448},{"user":6038,"product":213,"rating":4.546054686400765}]}
{"_1":6039,"_2":[{"user":6039,"product":572,"rating":4.685425482619273},{"user":6039,"product":527,"rating":4.291256016077275},{"user":6039,"product":904,"rating":4.27766400846558},{"user":6039,"product":2019,"rating":4.273486883864949},{"user":6039,"product":2905,"rating":4.266371181044469},{"user":6039,"product":912,"rating":4.26006044096224},{"user":6039,"product":1207,"rating":4.259935289367192},{"user":6039,"product":2503,"rating":4.250370780277651},{"user":6039,"product":1148,"rating":4.247288578998062},{"user":6039,"product":745,"rating":4.223697008637559}]}
{"_1":6040,"_2":[{"user":6040,"product":670,"rating":4.682008703927743},{"user":6040,"product":3134,"rating":4.603656534071515},{"user":6040,"product":2503,"rating":4.571906881428182},{"user":6040,"product":3415,"rating":4.523567737705732},{"user":6040,"product":3808,"rating":4.516778146579665},{"user":6040,"product":3245,"rating":4.496176019230939},{"user":6040,"product":53,"rating":4.491020821805015},{"user":6040,"product":668,"rating":4.471757243976877},{"user":6040,"product":3030,"rating":4.464674231353673},{"user":6040,"product":923,"rating":4.446195112198678}]}
{"_1":6042,"_2":[{"user":6042,"product":3389,"rating":3.331488167984286},{"user":6042,"product":572,"rating":3.3312810949271903},{"user":6042,"product":231,"rating":3.2622287749148926},{"user":6042,"product":1439,"rating":3.0988533259613944},{"user":6042,"product":333,"rating":3.0859809743588706},{"user":6042,"product":404,"rating":3.0573976830913203},{"user":6042,"product":216,"rating":3.044620107397873},{"user":6042,"product":408,"rating":3.038302525994588},{"user":6042,"product":2411,"rating":3.0190834747311244},{"user":6042,"product":875,"rating":2.9860048032439095}]}
This is a defect in spark-cloudant 1.6.3 that is fixed in 1.6.4. The pull request is https://github.com/cloudant-labs/spark-cloudant/pull/61
The answer is to upgrade to spark-cloudant 1.6.4. See this answer if you are trying to do that on the IBM Bluemix Spark Service: Spark-cloudant package 1.6.4 loaded by %AddJar does not get used by notebook

Cassandra 2.2.5 to 3.0.4 upgrade fails

Pretty much what it says: quiesce the node, stop Cassandra, upgrade the Cassandra RPMs from 2.2.5 to 3.0.4, and then start Cassandra. When it comes back up:
INFO 13:02:50 Detected version upgrade from 2.2.5 to 3.0.4, snapshotting system keyspace
INFO 13:02:50 Updating topology for all endpoints that have changed
Exception (java.lang.RuntimeException) encountered during startup: org.codehaus.jackson.JsonParseException: Unexpected character ('K' (code 75)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
at [Source: java.io.StringReader@27be81e5; line: 1, column: 2]
java.lang.RuntimeException: org.codehaus.jackson.JsonParseException: Unexpected character ('K' (code 75)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
at [Source: java.io.StringReader@27be81e5; line: 1, column: 2]
at org.apache.cassandra.utils.FBUtilities.fromJsonMap(FBUtilities.java:561)
at org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableParams(LegacySchemaMigrator.java:381)
at org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableMetadata(LegacySchemaMigrator.java:363)
at org.apache.cassandra.schema.LegacySchemaMigrator.readTableMetadata(LegacySchemaMigrator.java:273)
at org.apache.cassandra.schema.LegacySchemaMigrator.readTable(LegacySchemaMigrator.java:244)
at org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readTables$233(LegacySchemaMigrator.java:237)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at org.apache.cassandra.schema.LegacySchemaMigrator.readTables(LegacySchemaMigrator.java:237)
at org.apache.cassandra.schema.LegacySchemaMigrator.readKeyspace(LegacySchemaMigrator.java:186)
at org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readSchema$230(LegacySchemaMigrator.java:177)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at org.apache.cassandra.schema.LegacySchemaMigrator.readSchema(LegacySchemaMigrator.java:177)
at org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:77)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:223)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679)
And the node dies. I'm stumped.
Fixed: delete everything in datadir/system*/* and let it rebuild.

Unable to connect erlang-application to cassandra using erlcassa

I am unable to connect my Erlang application to Cassandra with ErlCassa. I am getting the following error message:
11> {ok, Cl} = erlcassa_client:connect("0.0.0.0", 9160).
** exception error: no case clause matching {'EXIT',{undef,[{thrift_client_util,new,
["0.0.0.0",9160,cassandra_thrift,[{framed,true}]],
[]},
{erlcassa_client,connect,2,
[{file,"src/erlcassa_client.erl"},{line,41}]},
{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,573}]},
{erl_eval,expr,5,[{file,"erl_eval.erl"},{line,364}]},
{shell,exprs,7,[{file,"shell.erl"},{line,674}]},
{shell,eval_exprs,7,[{file,"shell.erl"},{line,629}]},
{shell,eval_loop,3,[{file,"shell.erl"},{line,614}]}]}}
in function erlcassa_client:connect/2 (src/erlcassa_client.erl, line 41)
10> {ok, Cl} = erlcassa_client:connect("localhost", 9160).
** exception error: no case clause matching {'EXIT',{undef,[{thrift_client_util,new,
["localhost",9160,cassandra_thrift,[{framed,true}]],
[]},
{erlcassa_client,connect,2,
[{file,"src/erlcassa_client.erl"},{line,41}]},
{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,573}]},
{erl_eval,expr,5,[{file,"erl_eval.erl"},{line,364}]},
{shell,exprs,7,[{file,"shell.erl"},{line,674}]},
{shell,eval_exprs,7,[{file,"shell.erl"},{line,629}]},
{shell,eval_loop,3,[{file,"shell.erl"},{line,614}]}]}}
in function erlcassa_client:connect/2 (src/erlcassa_client.erl, line 41)
Erlang version:
Erlang R16B02 (erts-5.10.3) [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]
Cassandra version:
INFO 12:59:51,051 Cassandra version: 1.1.12
INFO 12:59:51,051 Thrift API version: 19.33.0
INFO 12:59:51,053 CQL supported versions: 2.0.0,3.0.0-beta1 (default: 2.0.0)
I think you need to add the "https://github.com/interline/erlang-thrift" dependency to your project, because erlcassa calls the thrift_client_util module from that dependency and cannot find it when the dependency has not been compiled with the project.

Accessing cassandra using pycassa

While studying the pycassa API I downloaded a sample project, Twissandra.
I configured it with Cassandra, and after login, when I add a tweet the following error occurs:
Environment:
Request Method: POST
Request URL: http://127.0.0.1:8000/
Django Version: 1.3.1
Python Version: 2.7.2
Installed Applications:
['django.contrib.sessions', 'tweets', 'users']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'users.middleware.UserMiddleware')
Traceback:
File "C:\Python27\lib\site-packages\django\core\handlers\base.py" in get_response
111. response = callback(request, *callback_args, **callback_kwargs)
File "C:\Users\Muhammad Umair\workspace\Twissandra\src\Twissandra\tweets\views.py" in
timeline
20. 'body': form.cleaned_data['body'],
File "C:\Users\Muhammad Umair\workspace\Twissandra\src\Twissandra\cass.py" in
save_tweet
216. USERLINE.insert(str(username), {ts: str(tweet_id)})
File "C:\Python27\lib\site-packages\pycassa-1.3.0-py2.7.egg\pycassa\columnfamily.py" in insert
860. colval = self._pack_value(columns.values()[0], colname)
File "C:\Python27\lib\site-packages\pycassa-1.3.0-py2.7.egg\pycassa\columnfamily.py" in _pack_value
428. return packer(value)
File "C:\Python27\lib\site-packages\pycassa-1.3.0-py2.7.egg\pycassa\marshal.py" in pack_uuid
202. randomize=True)
File "C:\Python27\lib\site-packages\pycassa-1.3.0-py2.7.egg\pycassa\util.py" in convert_time_to_uuid
66. 'neither a UUID, a datetime, or a number')
Exception Type: ValueError at /
Exception Value: Argument for a v1 UUID column name or value was neither a UUID, a datetime, or a number
Did you modify the Cassandra column families or create them yourself? Maybe you're using an old version of Twissandra?
This particular stack trace shows that pycassa is expecting a UUID for a column value, but in recent versions of Twissandra the column values are all BytesType (untyped).
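For illustration, here is a minimal pycassa sketch (the keyspace, column family, and key are hypothetical; it assumes a Twissandra-style schema on a local node with the Thrift port open):

import time
import pycassa
from pycassa.util import convert_time_to_uuid

pool = pycassa.ConnectionPool('Twissandra', ['127.0.0.1:9160'])
userline = pycassa.ColumnFamily(pool, 'Userline')

# Twissandra's Userline column family uses TimeUUID column names
ts = convert_time_to_uuid(time.time())

# If the column family's value validator is a UUID type (old schema), pycassa
# packs the value with pack_uuid and a plain string raises the ValueError above.
# With BytesType (untyped) values, as in current Twissandra, this insert works:
userline.insert('some_username', {ts: str(12345)})

So either recreate the column families with the schema shipped with the current Twissandra, or pass a UUID-compatible value if you keep the old one.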
