Unable to connect Erlang application to Cassandra using erlcassa

I am unable to connect my Erlang application to Cassandra with ErlCassa. I am getting the following error message:
11> {ok, Cl} = erlcassa_client:connect("0.0.0.0", 9160).
** exception error: no case clause matching {'EXIT',{undef,[{thrift_client_util,new,
["0.0.0.0",9160,cassandra_thrift,[{framed,true}]],
[]},
{erlcassa_client,connect,2,
[{file,"src/erlcassa_client.erl"},{line,41}]},
{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,573}]},
{erl_eval,expr,5,[{file,"erl_eval.erl"},{line,364}]},
{shell,exprs,7,[{file,"shell.erl"},{line,674}]},
{shell,eval_exprs,7,[{file,"shell.erl"},{line,629}]},
{shell,eval_loop,3,[{file,"shell.erl"},{line,614}]}]}}
in function erlcassa_client:connect/2 (src/erlcassa_client.erl, line 41)
10> {ok, Cl} = erlcassa_client:connect("localhost", 9160).
** exception error: no case clause matching {'EXIT',{undef,[{thrift_client_util,new,
["localhost",9160,cassandra_thrift,[{framed,true}]],
[]},
{erlcassa_client,connect,2,
[{file,"src/erlcassa_client.erl"},{line,41}]},
{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,573}]},
{erl_eval,expr,5,[{file,"erl_eval.erl"},{line,364}]},
{shell,exprs,7,[{file,"shell.erl"},{line,674}]},
{shell,eval_exprs,7,[{file,"shell.erl"},{line,629}]},
{shell,eval_loop,3,[{file,"shell.erl"},{line,614}]}]}}
in function erlcassa_client:connect/2 (src/erlcassa_client.erl, line 41)
Erlang version:
Erlang R16B02 (erts-5.10.3) [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]
Cassandra version:
INFO 12:59:51,051 Cassandra version: 1.1.12
INFO 12:59:51,051 Thrift API version: 19.33.0
INFO 12:59:51,053 CQL supported versions: 2.0.0,3.0.0-beta1 (default: 2.0.0)

I think you need to add the "https://github.com/interline/erlang-thrift" dependency to your project.
The undef in the error shows that erlcassa calls thrift_client_util:new/4 from that dependency, and it cannot find the module because the dependency has not been compiled with your project.
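If the project is built with rebar, a minimal sketch of adding it is below; the dependency name, branch, and rebar workflow are assumptions, only the repository URL comes from the answer above:
%% rebar.config -- sketch only; app name and branch are assumptions
{deps, [
    {thrift, ".*", {git, "https://github.com/interline/erlang-thrift.git", {branch, "master"}}}
]}.
After adding it, run rebar get-deps and rebar compile so that thrift_client_util is on the code path before calling erlcassa_client:connect/2.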

Related

Retrieving data from IBM DB2 using pyodbc and the related error

I have gone through multiple Stack Overflow posts about similar problems but am still stuck with the problem below, hence posting to seek guidance/pointers.
Following is the code:
import pypyodbc as pyodbc
import configparser

config = configparser.ConfigParser()
config.read('config.ini')

conn_str = 'DRIVER={' + config['db2']['driver'] + '};' \
           + 'SERVER=' + config['db2']['server'] + ';' \
           + 'DATABASE=' + config['db2']['database'] + ';' \
           + 'UID=' + config['db2']['uid'] + ';' \
           + 'PWD=' + config['db2']['password']
print(conn_str)

connection = pyodbc.connect(conn_str)
cur = connection.cursor()
cur.execute('SELECT col_1, col_2 FROM schema.table_name LIMIT 2')
for row in cur:
    print(row)
Output from code execution
[connect string output]
DRIVER={'IBM i Access ODBC Driver 64-bit'};SERVER='hostname';DATABASE='database';UID='userid';PWD='password'
[error from executing the code]
raise Error(state,err_text)
pypyodbc.Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified')
Configuration file
$ cat config.ini
[db2]
driver = 'IBM i Access ODBC Driver 64-bit'
server = 'hostname'
database = 'database'
uid = 'userid'
password = 'password'
Output of ODBC installer and uninstaller command
odbcinst -j
unixODBC 2.3.1
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /home/useradmin/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
cat /etc/odbcinst.ini
[PostgreSQL]
Description=ODBC for PostgreSQL
Driver=/usr/lib/psqlodbcw.so
Setup=/usr/lib/libodbcpsqlS.so
Driver64=/usr/lib64/psqlodbcw.so
Setup64=/usr/lib64/libodbcpsqlS.so
FileUsage=1
[MySQL]
Description=ODBC for MySQL
Driver=/usr/lib/libmyodbc5.so
Setup=/usr/lib/libodbcmyS.so
Driver64=/usr/lib64/libmyodbc5.so
Setup64=/usr/lib64/libodbcmyS.so
FileUsage=1
[IBM i Access ODBC Driver]
Description=IBM i Access for Linux ODBC Driver
Driver=/opt/ibm/iaccess/lib/libcwbodbc.so
Setup=/opt/ibm/iaccess/lib/libcwbodbcs.so
Driver64=/opt/ibm/iaccess/lib64/libcwbodbc.so
Setup64=/opt/ibm/iaccess/lib64/libcwbodbcs.so
Threading=0
DontDLClose=1
UsageCount=1
[IBM i Access ODBC Driver 64-bit]
Description=IBM i Access for Linux 64-bit ODBC Driver
Driver=/opt/ibm/iaccess/lib64/libcwbodbc.so
Setup=/opt/ibm/iaccess/lib64/libcwbodbcs.so
Threading=0
DontDLClose=1
UsageCount=1
$ cat ~/.odbc.ini
[db2]
Driver = IBM i Access ODBC Driver 64-bit
DATABASE = 'database'
SYSTEM = hostname
HOSTNAME = hostname
PORT = 446
PROTOCOL = TCPIP
$ isql db2 $username $password -v
[08001][unixODBC][IBM][System i Access ODBC Driver]The specified database can not be accessed at this time.
[ISQL]ERROR: Could not SQLConnect
I have double-checked and confirm that there is no typo in the driver name "IBM i Access ODBC Driver 64-bit".
OS information
x86_64 GNU/Linux
Any pointers/guidance on how to debug the issue, please?
I think you are confusing the schema name with the database name.
Odds are you can omit the database name completely (or leave it as an empty string and let it default to *SYSBAS). Instead, you can specify the DefaultLibraries argument.
See the IBM doc here for info on the valid connection string (and odbc.ini) keywords.
Similarly, you can omit the PORT, PROTOCOL, and HOSTNAME keywords, as they're not supported by this driver.
That will leave you with an odbc.ini that looks as simple as this:
[db2]
Driver = IBM i Access ODBC Driver 64-bit
DefaultLibraries = 'database'
SYSTEM = hostname
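Once the [db2] DSN in ~/.odbc.ini is in place, the Python side can also reference the DSN directly instead of rebuilding the DRIVER=... string by hand. A minimal sketch, reusing the names from the question (DSN name, user id, and password are placeholders):
import pypyodbc as pyodbc

# Connect through the DSN defined in ~/.odbc.ini rather than a raw DRIVER= string
connection = pyodbc.connect('DSN=db2;UID=userid;PWD=password')
cur = connection.cursor()
cur.execute('SELECT col_1, col_2 FROM schema.table_name LIMIT 2')
for row in cur:
    print(row)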

schemacrawler oracle-plugin not returning SYNONYMS

I am using SchemaCrawler to crawl an Oracle database (retrieve table/synonym metadata, including column details and foreign keys).
INFO:
-- generated by: SchemaCrawler 16.15.1
-- database: Oracle Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production
-- driver: Oracle JDBC driver 19.3.0.0.0
-- JVM system: AdoptOpenJDK OpenJDK 64-Bit Server VM 1.8.0_292-b10
In my POM I have included the Oracle plugin:
<dependency>
    <groupId>us.fatehi</groupId>
    <artifactId>schemacrawler-oracle</artifactId>
    <version>${schemacrawler.version}</version>
</dependency>
I have set the following in LimitOptionsBuilder and LoadOptionsBuilder to crawl the schema:
limitOptionsBuilder.tableTypes("TABLE,VIEW,SYNONYM");
limitOptionsBuilder.includeAllSynonyms();
final SchemaCrawlerOptions options = SchemaCrawlerOptionsBuilder.newSchemaCrawlerOptions()
.withLimitOptions(limitOptionsBuilder.toOptions())
.withLoadOptions(loadOptionsBuilder.toOptions());
Catalog cat = SchemaCrawlerUtility.getCatalog(conn, options);
In the Catalog output, I don't see any SYNONYMS. I did some debugging, and it seems the query that is sent to the database to get the tables uses DBA_TAB_COMMENTS, which unfortunately does not contain synonym information. In Oracle, synonyms are stored in ALL_SYNONYMS:
SELECT
NULL AS TABLE_CAT,
TABLES.OWNER AS TABLE_SCHEM,
TABLES.TABLE_NAME AS TABLE_NAME,
TABLES.TABLE_TYPE AS TABLE_TYPE,
TABLES.COMMENTS AS REMARKS
FROM
DBA_TAB_COMMENTS TABLES
WHERE
TABLES.OWNER NOT IN
('ANONYMOUS', 'APEX_PUBLIC_USER', 'APPQOSSYS', 'BI', 'CTXSYS', 'DBSNMP', 'DIP',
'EXFSYS', 'FLOWS_30000', 'FLOWS_FILES', 'GSMADMIN_INTERNAL', 'IX', 'LBACSYS',
'MDDATA', 'MDSYS', 'MGMT_VIEW', 'OE', 'OLAPSYS', 'ORACLE_OCM',
'ORDPLUGINS', 'ORDSYS', 'OUTLN', 'OWBSYS', 'PM', 'SCOTT', 'SH',
'SI_INFORMTN_SCHEMA', 'SPATIAL_CSW_ADMIN_USR', 'SPATIAL_WFS_ADMIN_USR',
'SYS', 'SYSMAN', 'SYSTEM', 'TSMSYS', 'WKPROXY', 'WKSYS', 'WK_TEST',
'WMSYS', 'XDB', 'XS$NULL', 'RDSADMIN')
AND NOT REGEXP_LIKE(TABLES.OWNER, '^APEX_[0-9]{6}$')
AND NOT REGEXP_LIKE(TABLES.OWNER, '^FLOWS_[0-9]{5,6}$')
AND REGEXP_LIKE(TABLES.OWNER, '${schemas}')
AND TABLES.TABLE_NAME NOT LIKE 'BIN$%'
AND NOT REGEXP_LIKE(TABLES.TABLE_NAME, '^(SYS_IOT|MDOS|MDRS|MDRT|MDOT|MDXT)_.*$')
UNION ALL
SELECT
NULL AS TABLE_CAT,
MVIEWS.OWNER AS TABLE_SCHEM,
MVIEWS.MVIEW_NAME AS TABLE_NAME,
'MATERIALIZED VIEW' AS TABLE_TYPE,
MVIEWS.COMMENTS AS REMARKS
FROM
DBA_MVIEW_COMMENTS MVIEWS
WHERE
MVIEWS.OWNER NOT IN
('ANONYMOUS', 'APEX_PUBLIC_USER', 'APPQOSSYS', 'BI', 'CTXSYS', 'DBSNMP', 'DIP',
'EXFSYS', 'FLOWS_30000', 'FLOWS_FILES', 'GSMADMIN_INTERNAL', 'IX', 'LBACSYS',
'MDDATA', 'MDSYS', 'MGMT_VIEW', 'OE', 'OLAPSYS', 'ORACLE_OCM',
'ORDPLUGINS', 'ORDSYS', 'OUTLN', 'OWBSYS', 'PM', 'SCOTT', 'SH',
'SI_INFORMTN_SCHEMA', 'SPATIAL_CSW_ADMIN_USR', 'SPATIAL_WFS_ADMIN_USR',
'SYS', 'SYSMAN', 'SYSTEM', 'TSMSYS', 'WKPROXY', 'WKSYS', 'WK_TEST',
'WMSYS', 'XDB', 'XS$NULL', 'RDSADMIN')
AND NOT REGEXP_LIKE(MVIEWS.OWNER, '^APEX_[0-9]{6}$')
AND NOT REGEXP_LIKE(MVIEWS.OWNER, '^FLOWS_[0-9]{5,6}$')
AND REGEXP_LIKE(MVIEWS.OWNER, '${schemas}')
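For comparison, a minimal sketch of a query that does surface synonyms, using Oracle's ALL_SYNONYMS view (the owner filter is a placeholder):
-- Synonyms live in ALL_SYNONYMS (or DBA_SYNONYMS), not in DBA_TAB_COMMENTS
SELECT
  NULL             AS TABLE_CAT,
  SYN.OWNER        AS TABLE_SCHEM,
  SYN.SYNONYM_NAME AS TABLE_NAME,
  'SYNONYM'        AS TABLE_TYPE,
  NULL             AS REMARKS
FROM
  ALL_SYNONYMS SYN
WHERE
  SYN.OWNER = 'MY_SCHEMA'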

How to configure and run Reaper to repair Cassandra on Linux (CentOS environment)

I'm trying to install and run Reaper 1.4 on my CentOS VM. I followed the same installation steps as in this video (https://www.youtube.com/watch?v=0dub29BgwPI), but still have no success getting Reaper started. Can anyone please help me with proper/complete documentation? I have also read and followed
http://cassandra-reaper.io/docs/download/
Below are my cassandra-reaper.yaml settings:
segmentCountPerNode: 16
repairParallelism: DATACENTER_AWARE
repairIntensity: 0.9
scheduleDaysBetween: 7
repairRunThreadCount: 15
hangingRepairTimeoutMins: 30
storageType: cassandra
enableCrossOrigin: true
incrementalRepair: false
blacklistTwcsTables: false
enableDynamicSeedList: true
repairManagerSchedulingIntervalSeconds: 10
activateQueryLogger: false
jmxConnectionTimeoutInSeconds: 5
useAddressTranslator: false
# purgeRecordsAfterInDays: 30
# numberOfRunsToKeepPerUnit: 10
jmxPorts:
  #127.0.0.1: 7100
  #10.X.X.X: 7199
  #127.0.0.2: 7200
  #127.0.0.3: 7300
  #127.0.0.4: 7400
  #127.0.0.5: 7500
  #127.0.0.6: 7600
  #127.0.0.7: 7700
  #127.0.0.8: 7800
jmxAuth:
  username: *****
  password: *****
server:
  type: default
  applicationConnectors:
    - type: http
      port: 8080
      bindHost: 0.0.0.0
  adminConnectors:
    - type: http
      port: 8081
      bindHost: 0.0.0.0
  requestLog:
    appenders: []
cassandra:
  clusterName: "dc1"
  contactPoints: ["10.X.X.1","10.X.X.2","10.X.X.3","10.X.X.4","10.X.X.5"]
  #contactPoints: ["127.0.0.1"]
  keyspace: "reaper_db"
  loadBalancingPolicy:
    type: tokenAware
    shuffleReplicas: true
    subPolicy:
      type: dcAwareRoundRobin
      localDC:
      usedHostsPerRemoteDC: 0
      allowRemoteDCsForLocalConsistencyLevel: false
  authProvider:
    type: plainText
    username: cass
    password: cass
  ssl:
    type: jdk
autoScheduling:
  enabled: false
  initialDelayPeriod: PT15S
  periodBetweenPolls: PT10M
  timeBeforeFirstSchedule: PT5M
  scheduleSpreadPeriod: PT6H
  excludedKeyspaces:
    - keyspace1
    - keyspace2
accessControl:
  sessionTimeout: PT10M
  shiro:
    iniConfigs: ["classpath:shiro.ini"]
log from /var/log/cassandra-reaper/reaper.log
INFO [main] i.c.ReaperApplication - initializing runner thread pool with 15 threads
INFO [main] i.c.ReaperApplication - initializing storage of type: cassandra
INFO [main] c.d.d.core - DataStax Java driver 3.5.0 for Apache Cassandra
INFO [main] c.d.d.c.GuavaCompatibility - Detected Guava >= 19 in the classpath, using modern compatibility layer
INFO [main] c.d.d.c.ClockFactory - Using native clock to generate timestamps.
INFO [main] c.d.d.c.NettyUtil - Found Netty's native epoll transport in the classpath, using it
INFO [main] o.a.s.c.ReflectionBuilder - An instance with name 'authc' already exists. Redefining this object as a new instance of type org.apache.shiro.web.filter.authc.PassThruAuthenticationFilter
log from /var/log/cassandra-reaper.err
at org.yaml.snakeyaml.scanner.ScannerImpl.fetchMoreTokens(ScannerImpl.java:415)
at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:226)
at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingValue.produce(ParserImpl.java:586)
at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:158)
at org.yaml.snakeyaml.parser.ParserImpl.getEvent(ParserImpl.java:168)
at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:347)
... 11 more
ls: cannot access server/target/cassandra-reaper-*.jar: No such file or directory
io.dropwizard.configuration.ConfigurationParsingException: /etc/cassandra-reaper/cassandra-reaper.yaml has an error:
* Malformed YAML at line: 27, column: 11; while scanning for the next token; found character '\t' that cannot start any token; in 'reader', line 27, column 1:
clusterName: "dc1"
^
at [Source: (ByteArrayInputStream); line: 26, column: 10]
at io.dropwizard.configuration.ConfigurationParsingException$Builder.build(ConfigurationParsingException.java:279)
at io.dropwizard.configuration.BaseConfigurationFactory.build(BaseConfigurationFactory.java:96)
at io.dropwizard.cli.ConfiguredCommand.parseConfiguration(ConfiguredCommand.java:126)
at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:74)
at io.dropwizard.cli.Cli.run(Cli.java:78)
at io.dropwizard.Application.run(Application.java:93)
at io.cassandrareaper.ReaperApplication.main(ReaperApplication.java:99)
Caused by: com.fasterxml.jackson.dataformat.yaml.snakeyaml.error.MarkedYAMLException: while scanning for the next token; found character '\t' that cannot start any token; in 'reader', line 27, column 1:
clusterName: "dc1"
^
Malformed YAML at line: 27, column: 11; while scanning for the next token; found character '\t' that cannot start any token; in 'reader', line 27, column 1:
clusterName: "dc1"
You need to remove any tab characters in your YAML file and replace them with 4 spaces instead.
See the answer here for why this is common when manipulating YAML files.
A YAML file cannot use tabs for indentation.
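As a sketch, the block the error points at (the cassandra storage section of the config above) should look like this once it is indented with spaces only:
cassandra:
  clusterName: "dc1"
  contactPoints: ["10.X.X.1","10.X.X.2","10.X.X.3","10.X.X.4","10.X.X.5"]
  keyspace: "reaper_db"
With the tabs gone, Dropwizard should get past the SnakeYAML '\t' error when parsing /etc/cassandra-reaper/cassandra-reaper.yaml.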

DSE Cassandra not starting

We are faced with a problem: we have a cluster of 5 nodes, and after a restart DSE tries to start without success. The last record in system.log is below.
Tried with heap at 48 and 64 GB; the node has 128 GB. Three of the nodes started, but these two cannot; there is no error in the log, just this record.
INFO [main] 2017-05-16 21:16:27,507 CassandraDaemon.java:487 - JVM Arguments: [-Ddse.server_process, -XX:+AlwaysPreTouch, -Dcassandra.disable_auth_caches_remote_configuration=false, -Dcassandra.force_default_indexing_page_size=false, -Dcassandra.join_ring=true, -Dcassandra.load_ring_state=true, -Dcassandra.write_survey=false, -XX:CMSInitiatingOccupancyFraction=75, -XX:CMSWaitDuration=10000, -ea, -XX:G1RSetUpdatingPauseTimePercent=5, -XX:+HeapDumpOnOutOfMemoryError, -Xms16G, -Djava.net.preferIPv4Stack=true, -XX:MaxGCPauseMillis=500, -Xmx16G, -XX:MaxTenuringThreshold=1, -Xss256k, -XX:+PerfDisableSharedMem, -XX:+ResizeTLAB, -XX:StringTableSize=1000003, -XX:SurvivorRatio=8, -XX:ThreadPriorityPolicy=42, -XX:+UseThreadPriorities, -XX:+UseTLAB, -XX:+UseG1GC, -Dcom.sun.management.jmxremote.authenticate=false, -Dcassandra.jmx.local.port=7199, -XX:CompileCommandFile=/etc/dse/cassandra/hotspot_compiler, -javaagent:/usr/share/dse/cassandra/lib/jamm-0.3.0.jar, -Djava.library.path=/usr/share/dse/hadoop2-client/lib/native:/usr/share/dse/cassandra/lib/sigar-bin:/usr/share/dse/hadoop2-client/lib/native:/usr/share/dse/cassandra/lib/sigar-bin:, -Dguice_include_stack_traces=OFF, -Ddse.system_memory_in_mb=128658, -Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader, -Dguice_include_stack_traces=OFF, -Ddse.system_memory_in_mb=128658, -Dcassandra.config.loader=com.datastax.bdp.config.DseConfigurationLoader, -Dlogback.configurationFile=logback.xml, -Dcassandra.logdir=/var/log/cassandra, -Dcassandra.storagedir=/usr/share/dse/data, -Dcassandra-pidfile=/var/run/dse/dse.pid, -Dgraph-enabled=true, -XX:HeapDumpPath=/var/lib/cassandra/java_1494958565.hprof, -XX:ErrorFile=/var/lib/cassandra/hs_err_1494958565.log, -Dguice_include_stack_traces=OFF, -Ddse.system_memory_in_mb=128658, -Dcassandra.config.loader=com.datastax.bdp.config

Cassandra 2.2.5 to 3.0.4 upgrade fails

Pretty much what it says.
Quiesce the node, stop Cassandra, upgrade the Cassandra RPMs from 2.2.5 to 3.0.4, and then start Cassandra. When it comes back up:
INFO 13:02:50 Detected version upgrade from 2.2.5 to 3.0.4, snapshotting system keyspace
INFO 13:02:50 Updating topology for all endpoints that have changed
Exception (java.lang.RuntimeException) encountered during startup: org.codehaus.jackson.JsonParseException: Unexpected character ('K' (code 75)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
at [Source: java.io.StringReader#27be81e5; line: 1, column: 2]
java.lang.RuntimeException: org.codehaus.jackson.JsonParseException: Unexpected character ('K' (code 75)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
at [Source: java.io.StringReader#27be81e5; line: 1, column: 2]
at org.apache.cassandra.utils.FBUtilities.fromJsonMap(FBUtilities.java:561)
at org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableParams(LegacySchemaMigrator.java:381)
at org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableMetadata(LegacySchemaMigrator.java:363)
at org.apache.cassandra.schema.LegacySchemaMigrator.readTableMetadata(LegacySchemaMigrator.java:273)
at org.apache.cassandra.schema.LegacySchemaMigrator.readTable(LegacySchemaMigrator.java:244)
at org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readTables$233(LegacySchemaMigrator.java:237)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at org.apache.cassandra.schema.LegacySchemaMigrator.readTables(LegacySchemaMigrator.java:237)
at org.apache.cassandra.schema.LegacySchemaMigrator.readKeyspace(LegacySchemaMigrator.java:186)
at org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readSchema$230(LegacySchemaMigrator.java:177)
at java.util.ArrayList.forEach(ArrayList.java:1249)
at org.apache.cassandra.schema.LegacySchemaMigrator.readSchema(LegacySchemaMigrator.java:177)
at org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:77)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:223)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679)
And the node dies. I'm stumped.
Fixed: delete everything in datadir/system*/* and let the node rebuild it.
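A minimal sketch of that fix on a package install (the data directory path and service name are assumptions; adjust to your data_file_directories and take a snapshot/backup first):
# Sketch only -- paths and service name are assumptions
sudo service cassandra stop
sudo rm -rf /var/lib/cassandra/data/system*/*   # system keyspaces are rebuilt on startup
sudo service cassandra start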
