I have a MemSQL cluster with 1 master and 4 leaf nodes.
My problem is that the master node is not running, yet it shows as connected in the cluster, and I can still read and write data to the cluster.
While trying to restart the master node, I see the following error:
2018-03-31 20:54:22: Jb2ae955f6 [ERROR] Failed to connect to MemSQL node BD60BED7C8082966F375CBF983A46A9E39FAA791: ProcessHandshakeResponsePacket() failed. Sending back 1045: Access denied for user 'root'@'xx.xx.xx.xx' (using password: NO)
ProcessHandshakeResponsePacket() failed. Sending back 1045: Access denied for user 'root'@'10.254.34.135' (using password: NO)
Cluster status
Index ID Agent Id Process State Cluster State Role Host Port Version
1 BD60BED Afb08cd NOT RUNNING CONNECTED MASTER 10.254.34.135 3306 5.8.10
2 D84101F A10aad5 RUNNING CONNECTED LEAF 10.254.42.244 3306 5.8.10
3 3D2A2AF Aa2ac03 RUNNING CONNECTED LEAF 10.254.38.76 3306 5.8.10
4 D054B1C Ab6c885 RUNNING CONNECTED LEAF 10.254.46.99 3306 5.8.10
5 F8008F7 Afb08cd RUNNING CONNECTED LEAF 10.254.34.135 3307 5.8.10
That error means that while the node is online, memsql-ops is unable to log in to it, most likely because the root user's password is misconfigured somewhere: memsql-ops is configured with no password for that node, but the MemSQL node itself likely does have a root password set.
Did you set a root password in MemSQL? Are you able to connect to the master node directly via the mysql client?
If yes, you can fix this by logging in to the MemSQL master node directly and changing the root password to blank:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '' WITH GRANT OPTION;
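For example, you could open a direct session from the master host and run the statement above (a sketch; the host and port are taken from the cluster status output above, and you would enter the node's current root password at the prompt):
mysql -h 10.254.34.135 -P 3306 -u root -p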
Then, after ensuring that connectivity is restored, you can set a new root password later with the memsql-update-root-password command: https://docs.memsql.com/memsql-ops-cli-reference/v6.0/memsql-update-root-password/.
I am trying to set up a multi-node, multi-datacenter cluster with Cassandra 3.11.
For datacenter 1 I have Cassandra running on 3 nodes (e.g. 10.90.22.11, 10.90.22.12, and 10.90.22.13), and for datacenter 2 I have Cassandra running on 2 nodes (e.g. 10.90.22.21 and 10.90.22.22).
The ring is up, but the datacenters are working separately. To make them work together, I updated endpoint_snitch to GossipingPropertyFileSnitch and set dc and rack in cassandra-rackdc.properties to DC1 and DC2 for the respective nodes, following the steps mentioned in this link.
After these changes, when I restart Cassandra the service shows as running, but when I check the ring with nodetool status I receive an error:
nodetool: Failed to connect to '127.0.0.1:7199'
ConnectException: 'Connection refused (Connection refused)'
What am I missing?
The error you posted indicates that nodetool couldn't connect to the JMX service that is supposed to be listening on port 7199:
Failed to connect to '127.0.0.1:7199'
Verify that Cassandra is running and check that the process is bound to the expected ports, including 7199, 9042, and 7000. You can try running one of these commands:
$ netstat -tnlp
$ sudo lsof -nPi | grep LISTEN | grep java
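If Cassandra is up, each of those ports should show up in the LISTEN state; you could filter the output down to just those ports, for example:
$ netstat -tnlp | grep -E '7199|9042|7000'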
Cheers!
You should try the nodetool command with the host/IP you have configured in cassandra.yaml. Also, check that port 7199 (or your custom JMX port, if you set one) is allowed through the firewall:
nodetool -h <hostname/ip> status
You can also pass a username/password if you have authentication enabled. Please refer to the link below for more details:
http://cassandra.apache.org/doc/latest/tools/nodetool/status.html
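For example, against one of the nodes (a sketch; substitute your own host, JMX port, and credentials if JMX authentication is enabled):
nodetool -h 10.90.22.11 -p 7199 -u <username> -pw <password> status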
I installed OpenProject according to the CentOS 7 docs (https://www.openproject.org/download-and-installation/#installation), and after running sudo openproject configure I get the error output shown below:
➜ ~ sudo openproject configure
Launching installer for openproject...
Selected addons: legacy-installer mysql apache2 repositories smtp memcached openproject
[legacy-installer] ./bin/configure
[mysql] ./bin/configure
DONE
[apache2] ./bin/configure
DONE
[repositories] ./bin/configure
DONE
[smtp] ./bin/configure
DONE
[memcached] ./bin/configure
DONE
[openproject] ./bin/configure
[legacy-installer] ./bin/preinstall
[mysql] ./bin/preinstall
[apache2] ./bin/preinstall
Note: Forwarding request to 'systemctl enable httpd.service'.
[repositories] ./bin/preinstall
[smtp] ./bin/preinstall
[memcached] ./bin/preinstall
No memcached server to install. Skipping.
[openproject] ./bin/preinstall
[legacy-installer] ./bin/postinstall
[mysql] ./bin/postinstall
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
I have never used MariaDB before, but from some basic checking (following https://www.digitalocean.com/community/tutorials/how-to-reset-your-mysql-or-mariadb-root-password):
➜ ~ mysqladmin -u root -p version
Enter password:
mysqladmin Ver 9.0 Distrib 5.5.60-MariaDB, for Linux on x86_64
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Server version 5.5.60-MariaDB
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /var/lib/mysql/mysql.sock
Uptime: 21 min 38 sec
Threads: 1 Questions: 16 Slow queries: 0 Opens: 7 Flush tables: 2 Open tables: 25 Queries per second avg: 0.012
(where the password is just blank) the DB appears to be working and accessible. Removing and re-installing the mariadb-server package does not appear to change the behavior either. I have never used OpenProject or MySQL/MariaDB before, so any advice on what could be happening here would be appreciated.
Did you choose Install and configure MySQL server locally during the configuration step?
If so, the package should automatically configure a random MySQL root password, which is stored under the key mysql/root_password in /etc/openproject/installer.dat. If you can still access the database with an empty password, the setup failed to set the password correctly. (The password appears to be generated from urandom.)
What you could try is running the following command, pressing Enter at the MySQL password prompt (since the current password is blank):
mysqladmin -u root -p password "<Create and insert a random password here>"
Then replace the value under mysql/root_password in /etc/openproject/installer.dat.
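A minimal sketch of the whole sequence, assuming installer.dat stores plain key-value lines (the openssl call is just one way to generate a random password):
NEWPASS="$(openssl rand -hex 16)"
mysqladmin -u root -p password "$NEWPASS"   # press Enter at the (currently blank) password prompt
sudo sed -i "s|^mysql/root_password.*|mysql/root_password $NEWPASS|" /etc/openproject/installer.dat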
Running memsql-deploy for a leaf node on the same host as the master always fails with the same error. Switching the operation to new machines produces the same failure.
Here are the steps to deploy the master role:
# memsql-ops memsql-deploy -a Af53bfb -r master -P 3306 --community-edition
2017-03-24 16:15:54: Je5725b [INFO] Deploying MemSQL to 172.17.0.3:3306
2017-03-24 16:15:59: Je5725b [INFO] Installing MemSQL
2017-03-24 16:16:02: Je5725b [INFO] Finishing MemSQL Install
Waiting for MemSQL to start...
MemSQL successfully started
Here are the steps used to add a leaf node immediately after deploying the master:
# memsql-ops memsql-deploy -a Af53bfb -r leaf -P 3308
2017-03-24 16:16:43: J32c71f [INFO] Deploying MemSQL to 172.17.0.3:3308
2017-03-24 16:16:43: J32c71f [INFO] Installing MemSQL
2017-03-24 16:16:46: J32c71f [INFO] Finishing MemSQL Install
Waiting for MemSQL to start...
MemSQL failed to start: Failed to start MemSQL:
set_mempolicy: Operation not permitted
setting membind: Operation not permitted
What are the possible reasons behind these error messages, and how can I go about finding the root cause or a fix?
After a day of searching on Google, I believe I have finally located the root cause of this error. I find it strange that no one has asked about it before, because it must happen to more people than just me.
The real cause of this issue is that I installed the numactl package, per MemSQL's best-practice suggestion, on a non-NUMA machine. With numactl present, every MemSQL node other than the first tries to use numactl's set_mempolicy call to bind itself to specific CPUs, and on a non-NUMA machine that call fails, so starting the node via the memsql-ops sub-commands memsql-start or memsql-deploy fails as well.
The workaround is very simple: just remove the numactl package, and everything works. This applies in particular to virtualization-based MemSQL deployments such as Docker.
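To check whether a box is NUMA at all before removing anything, something like this works (a sketch; yum is shown here, so adjust to your distribution's package manager):
numactl --hardware   # a non-NUMA machine reports a single node
sudo yum remove numactl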
Can you try the following on the master:
memsql-ops start
memsql-ops memsql-deploy --role master -P 3306 --community-edition
On the agent:
memsql-ops start
memsql-ops follow -h <host of primary agent> -P <port of primary agent if configured to use one>
memsql-ops memsql-deploy --role leaf -P 3308 --community-edition
I tried installing Kubernetes v1.2.0 in an Azure environment, but after installation I cannot access the kube APIs on port 8080.
The following services are running:
root 1473 0.2 0.5 536192 42812 ? Ssl 09:22 0:00 /home/weave/weaver --port 6783 --name 22:95:7a:6e:30:ed --nickname kube-00 --datapath datapath --ipalloc-range 10.32.0.0/12 --dns-effective-listen-address 172.17.42.1 --dns-listen-address 172.17.42.1:53 --http-addr 127.0.0.1:6784
root 1904 0.1 0.2 30320 20112 ? Ssl 09:22 0:00 /opt/kubernetes/server/bin/kube-proxy --master=http://kube-00:8080 --logtostderr=true
root 1907 0.0 0.0 14016 2968 ? Ss 09:22 0:00 /bin/bash -c until /opt/kubernetes/server/bin/kubectl create -f /etc/kubernetes/addons/; do sleep 2; done
root 1914 0.2 0.3 35888 22212 ? Ssl 09:22 0:00 /opt/kubernetes/server/bin/kube-scheduler --logtostderr=true --master=127.0.0.1:8080
root 3129 2.2 0.3 42488 25192 ? Ssl 09:27 0:00 /opt/kubernetes/server/bin/kube-controller-manager --master=127.0.0.1:8080 --logtostderr=true
curl -v http://localhost:8080 returns an error:
Rebuilt URL to: http://localhost:8080/
Trying 127.0.0.1...
connect to 127.0.0.1 port 8080 failed: Connection refused
Failed to connect to localhost port 8080: Connection refused
Closing connection 0 curl: (7) Failed to connect to localhost port 8080: Connection refused
The same setup works fine with v1.1.2.
I'm following the guidelines at https://github.com/kubernetes/kubernetes/tree/master/docs/getting-started-guides/coreos/azure and updated the line at https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml#L187 to use version v1.2.0.
The services you show running do not include the apiserver. For a quick breakdown, here is what each of the services you listed does:
Weave: This is a software overlay network and assigns IP addresses to your pods.
kube-proxy: This runs on your worker nodes to allow pods to run and to route traffic between exposed services.
kubectl create: kubectl is the management CLI tool; in this case the until ... kubectl create -f /etc/kubernetes/addons/; do sleep 2; done loop keeps retrying until it succeeds in creating whatever objects (pods, replication controllers, services, etc.) are defined in the /etc/kubernetes/addons/ folder.
kube-scheduler: Responsible for scheduling pods onto nodes. Uses policies and rules.
kube-controller-manager: Manages the state of the cluster by always making sure the current state and desired state are the same. This includes starting/stopping pods and creating objects (services, replication-controllers, etc) that do not yet exist or killing them if they shouldn't exist.
All of these services interact with the kube-apiserver which should be a separate service that coordinates all of the information these other services use. You'll need the apiserver running in order for all of the other components to do their jobs.
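To confirm that the apiserver is indeed missing, you could check for its process and its port directly (a minimal sketch; the exact binary path may differ on your install):
ps aux | grep [k]ube-apiserver
sudo netstat -tnlp | grep 8080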
I won't go into the details of getting it running in your environment, but from the comments on your original thread it looks like you found some missing documentation to get it running.
I am running a 6-node cluster of Cassandra 1.2 on an Amazon Web Services VPC with Oracle's 64-bit JVM version 1.7.0_10.
When I'm logged on to one of the nodes (e.g. 10.0.12.200), I can run nodetool -h 10.0.12.200 status just fine.
However, if I try to use another IP address in the cluster (10.0.32.153) from that same terminal, I get Failed to connect to '10.0.32.153:7199': Connection refused.
On the 10.0.32.153 node I am trying to connect to, I've made the following checks.
From 10.0.12.200 I can run telnet 10.0.32.153 7199 and I get a connection, so it doesn't appear to be a security group/firewall issue to port 7199.
On 10.0.32.153 if I run netstat -ant|grep 7199 I see
tcp 0 0 0.0.0.0:7199 0.0.0.0:* LISTEN
so Cassandra does appear to be listening on the port.
The cassandra-env.sh file on 10.0.32.153 has all of the JVM_OPTS for JMX active:
-Dcom.sun.management.jmxremote.port=7199 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false
The only shot in the dark I've seen while searching the interwebs for this problem is to set the following:
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=10.0.32.153"
But when I do this I don't even get a response. It just hangs.
Any guidance would be greatly appreciated.
The issue did end up being a firewall/security group issue. While it is true that JMX uses port 7199, other ports are also picked at random for RMI; see Cassandra port usage - how are the ports used?
So the solution is to open up the firewall for those ports and then configure cassandra-env.sh to include:
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=<ip>"