MemSQL Ops schema - No databases

I have upgraded MemSQL Ops to version 5.1.2 and deleted some unwanted nodes. Now when I click Explore > Schema I get the following error:
Error: (1735, "Unable to connect to leaf #139.59.144.183:3307 with user root, using
password NO: [2004] Cannot connect to '139.59.144.183':3307. Errno=111 (Connection refused)")
No databases
The leaf node on 139.59.144.183 does not show up when I run:
memsql-ops memsql-list
It looks like the node was not completely deleted. Is there a way to manually delete a partially deleted node?

The error you are seeing on the schema page appears because the leaf still shows up in SHOW LEAVES on the master aggregator. The solution is to run REMOVE LEAF for the offending leaf on the master aggregator directly.
More information is available at https://docs.memsql.com/docs/remove-leaf
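For example, a minimal sketch using the memsql client against the master aggregator (the FORCE modifier is an assumption on my part; it is typically needed when the leaf is already unreachable, so check the linked docs for your version):
memsql -u root -p -e "SHOW LEAVES"
memsql -u root -p -e "REMOVE LEAF '139.59.144.183':3307 FORCE"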

Related

CouchDB v1.7.1 database replication to CouchDB v2.3.0 database fails

In Fauxton, I've set up a replication rule from a CouchDB v1.7.1 database to a new CouchDB v2.3.0 database.
The source does not have any authentication configured. The target does. I've added the username and password to the Job Configuration.
It looks like the replication got stuck somewhere in the process. 283.8 KB (433 documents) are present in the new database. The source contains about 18.7 MB (7215 docs) of data.
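For reference, a minimal sketch of the equivalent replication document posted via curl (the admin credentials, host names, and inline target credentials below are placeholders, not my real values):
curl -X POST http://admin:adminpass@localhost:5984/_replicator -H 'Content-Type: application/json' -d '{"source": "https://my-website.com/source-database-name/", "target": "http://targetuser:targetpass@my-website.com/target-database-name/", "continuous": true}'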
When restarting the database, I'm always getting the following error:
[error] 2019-02-17T17:29:45.959000Z nonode@nohost <0.602.0> --------
throw:{unauthorized,<<"unauthorized to access or create database
http://my-website.com/target-database-name/">>}:
Replication 5b4ee9ddc57bcad01e549ce43f5e31bc+continuous failed to
start "https://my-website.com/source-database-name/ "
-> "http://my-website.com/target-database-name/ " doc
<<"shards/00000000-1fffffff/_replicator.1550593615">>:<<"1e498a86ba8e3349692cc1c51a00037a">>
stack:[{couch_replicator_api_wrap,db_open,4,[{file,"src/couch_replicator_api_wrap.erl"},{line,114}]},{couch_replicator_scheduler_job,init_state,1,[{file,"src/couch_replicator_scheduler_job.erl"},{line,584}]}]
I'm not sure what is going on here. From the logs I understand there's an authorization issue. But the database is already present (hence, it has been replicated partially already).
What does this error mean and how can it be resolved?
The reason for this error is that the CouchDB v2.3.0 instance was being re-initialized on reboot. It required me to fill in the cluster configuration again.
The replication could not continue until I had re-applied the configuration.
The issue with having to re-apply the cluster configuration has been solved in another SO question.
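As a minimal sketch of checking and completing the setup via the cluster setup API (the admin credentials and port are placeholders; the finish_cluster action assumes the earlier setup steps were already re-applied):
curl -s http://admin:adminpass@127.0.0.1:5984/_cluster_setup
curl -s -X POST http://admin:adminpass@127.0.0.1:5984/_cluster_setup -H 'Content-Type: application/json' -d '{"action": "finish_cluster"}'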

Unable to start Kudu master

While starting kudu-master, I am getting the below error and I am unable to start the Kudu cluster.
F0706 10:21:33.464331 27576 master_main.cc:71] Check failed: _s.ok() Bad status: Invalid argument: Unable to initialize catalog manager: Failed to initialize sys tables async: on-disk master list (hadoop-master:7051, slave2:7051, slave3:7051) and provided master list (:0) differ. Their symmetric difference is: :0, hadoop-master:7051, slave2:7051, slave3:7051
It is a cluster of 8 nodes, and I have provided 3 masters, as given below, in master.gflagfile on the master nodes.
--master_addresses=hadoop-master,slave2,slave3
TL;DR
If this is a new installation, working under the assumption that the master IP addresses are correct, I believe the easiest solution is to (a shell sketch follows these steps):
Stop kudu masters
Nuke the <kudu-data-dir>/master directory
Start kudu masters
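A minimal sketch of those steps in shell, assuming systemd service names and /var/lib/kudu as <kudu-data-dir> (both are assumptions; substitute your own data directory and service manager):
sudo systemctl stop kudu-master       # on every master node
sudo rm -rf /var/lib/kudu/master      # i.e. <kudu-data-dir>/master; this wipes the master's metadata
sudo systemctl start kudu-master      # on every master node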
Explanation
I believe the most common (if not only) cause of this error (Failed to initialize sys tables async: on-disk master list (hadoop-master:7051, slave2:7051, slave3:7051) and provided master list (:0) differ.) is a kudu master node being added incorrectly. The error suggests that kudu-master thinks it's running on a single node rather than a 3-node cluster.
Maybe you did not intend to "add a node", but that's most likely what happened. I'm saying this because I had the same problem; after some googling and debugging, I discovered that during the installation I had started kudu-master before putting the correct IP addresses in master.gflagfile, so kudu-master was spun up thinking it was running on a single node, not a 3-node cluster. Using the steps above to cleanly reinstall kudu-master solved my problem.

Remote access to leaf node when using memsql-spark-connector

I'm trying to test memsql-spark-connector and for this I created a single node MemSQL cluster on AWS (https://docs.memsql.com/docs/quick-start-with-amazon-webservices).
On my laptop I want to run a Spark application in local mode. This application should simply create a DataFrame for a table and collect all rows. Here is the code:
import org.apache.spark.{SparkConf, SparkContext}
// MemSQLContext ships with memsql-spark-connector 1.x; verify the import path against your connector version
import com.memsql.spark.context.MemSQLContext

val conf = new SparkConf()
  .setAppName("Test App")
  .setMaster("local[*]")
  .set("memsql.host", "x.x.x.x")
  .set("memsql.port", "3306")
  .set("memsql.user", "root")
  .set("memsql.password", "1234")
  .set("memsql.defaultDatabase", "dataframes_test")
val sc = new SparkContext(conf)
val memsql = new MemSQLContext(sc)
val df = memsql.table("person")
df.collect().foreach(println)
where x.x.x.x is the address of my AWS instance.
The problem is that although I can connect to the MemSQL server from my laptop, memsql-spark-connector tries to access the leaf node directly (i.e. connect to port 3307 instead of 3306). And when this happens I get the following error:
java.sql.SQLException: Access denied for user 'root'@'108.208.196.149' (using password: YES)
But the root user actually does have all permissions:
memsql> show grants for 'root'@'%';
+--------------------------------------------------------------------------------------------------------------------------------+
| Grants for root@%                                                                                                                |
+--------------------------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY PASSWORD '*A49656EC00D74D3524072F3452C1FBA7A1F3B561' WITH GRANT OPTION  |
+--------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
Is it possible to grant permissions to leaf nodes so that this connection to x.x.x.x:3307 is successful as well?
I realize that it's probably not the way it's designed to be used, but I want to do it this way only for testing. It's convenient to debug when everything is in a single JVM, and I don't want to bother with a Spark installation for now. I could install MemSQL locally to solve my problem, but I can't do this on a Mac (is this right, BTW?).
Any help appreciated!
UPDATE: I just tried to connect locally on the server, and it still doesn't work:
ubuntu#ip-x-x-x-x:~$ memsql -P 3307 -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
The password I'm providing is correct; on AWS it's the instance ID, so it's very hard to make a mistake.
This means that it wouldn't work even if I had a Spark executor on the same instance as the leaf node. It feels like something is wrong with my setup, but I actually didn't change any settings; everything is at the defaults.
Are the master node and leaf node supposed to use the same credentials? Is there a way to set them up separately for the leaf?
That error means that the login was denied, i.e. an incorrect username/password (not that the user lacks permissions). Make sure the password you are using in the Spark connector matches the password on all the nodes.
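As a minimal sketch of one way to align the credentials, assuming the leaf still accepts a local passwordless root login (the default on a freshly provisioned node) and reusing the password from the question:
memsql -h 127.0.0.1 -P 3307 -u root -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '1234' WITH GRANT OPTION"
Run that on each leaf host, then retry both the direct login and the Spark connector.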

DataStax Enterprise Portfolio Manager Demo Fails

I am trying to experiment with Spark and found this demo provided by DataStax:
http://docs.datastax.com/en/datastax_enterprise/4.5/datastax_enterprise/anaHome/anaPortfolioMgrSpark.html
I was able to run the pricer utility, and data was populated into the created tables without any issues.
The next command is to start the website, which I did:
$ cd website
$ sudo ./start
And then open the url on the browser:
http://localhost:8983/portfolio
The website appears to load but a script error comes up:
fname is undefined
The initial error seems to stem from here:
if (mtype == Thrift.MessageType.EXCEPTION) {
    var x = new Thrift.TApplicationException();
    x.read(this.input);
    this.input.readMessageEnd();
    throw x;
}
The error is occurring in thrift.js (screenshot not included here).
Any ideas on how to resolve this? I believe this is probably a bug (or limitation) that must be addressed by the DataStax team. I am using DSE on Ubuntu. My best guess is that it's because I don't use localhost in my cassandra.yaml file, and the website gives me no way to specify the IP of my database server. There appears to be a duplicate of the cassandra.yaml file in the website resource directory (maybe it uses that?), but it has the same settings as my primary cassandra.yaml file.

Cassandra Streaming error - Unknown keyspace system_traces

In our dev cluster, which had been running smoothly before, when we replace a node (which we have been doing constantly), the following failure occurs and prevents the replacement node from joining.
The Cassandra version is 2.0.7.
What can be done about it?
ERROR [STREAM-IN-/10.128.---.---] 2014-11-19 12:35:58,007 StreamSession.java (line 420) [Stream #9cad81f0-6fe8-11e4-b575-4b49634010a9] Streaming error occurred
java.lang.AssertionError: Unknown keyspace system_traces
at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:260)
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:110)
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:88)
at org.apache.cassandra.streaming.StreamSession.addTransferRanges(StreamSession.java:239)
at org.apache.cassandra.streaming.StreamSession.prepare(StreamSession.java:436)
at org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:368)
at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:289)
at java.lang.Thread.run(Thread.java:745)
I got the same error while I was trying to set up my cluster. As I was experimenting with different switches in cassandra.yaml, I restarted the service multiple times and removed the system dir under the data directory (/var/lib/cassandra/data, as mentioned here).
I guess for some reason Cassandra tries to load the system_traces keyspace (the other dir under /var/lib/cassandra/data) and fails, and nodetool throws this error. You can just remove both system and system_traces before starting the Cassandra service, or even better, delete all content of the commitlog, data, and saved_caches directories there.
Obviously, this works only if you don't have any data in the system yet.
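A minimal shell sketch of that cleanup, assuming the default /var/lib/cassandra layout and a service-managed install (both are assumptions; adjust the paths and service command to your setup):
sudo service cassandra stop
sudo rm -rf /var/lib/cassandra/data/system /var/lib/cassandra/data/system_traces
# or, on a node with no data worth keeping, clear everything:
sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*
sudo service cassandra start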
