I followed the instructions at https://www.digitalocean.com/community/tutorials/how-to-install-memsql-on-ubuntu-14-04
MemSQL Ops is running on http://my_server_ip:9000
Is there a way to password protect the MemSQL Ops interface so the outside world cannot access my database configuration?
Yes, you can use the memsql-ops command line interface to create a superuser in Ops, which effectively password protects Ops. Once a superuser is defined, all Ops UI users will need to log in before they can see any information.
http://docs.memsql.com/latest/ops/cli/SUPERUSER-ADD/
Learn more about the memsql-ops command line interface by visiting the MemSQL documentation or by SSHing into your host machine and running memsql-ops --help.
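For example, after SSHing into the host running Ops, something along these lines should do it (the exact arguments and prompts may differ between versions, so this is only a sketch; confirm with memsql-ops superuser-add --help or the linked docs):
memsql-ops superuser-add admin    # "admin" is just an example username
Once the superuser exists, reloading http://my_server_ip:9000 should present a login screen instead of the dashboard.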
Related
Hey folks,
I have set up the Cassandra GUI (OpsCenter) using this doc: https://docs.datastax.com/en/install/6.8/install/opscInstallRHEL.html
But as soon as I hit the URL it opens this page.
Here I chose "Manage Existing Cluster" and went to the next page,
where it is shown like this.
There I pasted my private IP,
but as soon as I hit Next it shows this error.
How can I add my cluster in OpsCenter?
sudo service opscenterd status
It shows as being in the Running state.
I entered the username and password and then it shows this.
OpsCenter only works with DataStax Enterprise.
You cannot use OpsCenter to connect to:
open-source Apache Cassandra,
other distributions which are forks of Cassandra, or
versions from cloud vendors which run a CQL API engine to make the database "look and feel" like Cassandra.
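If you are not sure which distribution you are running, a quick sanity check (assuming a standard install) is to look at the output of nodetool version on a node; DataStax Enterprise installs also ship the dse binary, so dse -v prints a version only on DSE nodes, while plain Apache Cassandra has no such command.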
Cheers!
First Situation:
I have downloaded Spark on Ubuntu 14. But when I run it, the result below comes up. It's not able to start properly and it asks for the root user's password.
root@sueeze-Lenovo-G580:~/Spark/spark-2.3.0-bin-hadoop2.6/sbin# ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /home/sueeze/Spark/spark-2.3.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.master.Master-1-ashish-Lenovo-G580.out
root@localhost's password:
May I know please:
1) What is this?
2) Why is it happening?
3) Why is it asking for root's password?
Situation 2:
I have fixed situation one by adding localhost to the slaves file.
Now both the master and the worker are running. I want to know what was missing, and why it started running once I added localhost.
But it still asks me for root's password when it starts the worker node. I mean, it starts the Master first, but the moment it goes to start the worker it asks for root's password. Why? And how can I fix that...?
I'm running Spark only on my local machine, so the master and the slave are both on my local machine. Correct...?
Thanks
The launch scripts for the standalone cluster use SSH connections to log onto worker machines (whether that's on localhost or not).
That is why the script is prompting you for the password.
You need to configure password-less login on those machines using SSH keys to prevent the interactive prompts. In short, this is about allowing the command ssh localhost or ssh workermachinehost to go through without prompting you for a password (use ssh-copy-id).
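A minimal sketch of that setup, assuming the default key location and that the same user starts the cluster on both ends:
ssh-keygen -t rsa        # accept the defaults; an empty passphrase gives a fully non-interactive startup
ssh-copy-id localhost    # or ssh-copy-id user@workermachinehost for a real remote worker
ssh localhost            # should now log you in without asking for a password
If ./start-all.sh still prompts after this, the key was most likely set up for a different user than the one the script is connecting as.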
You should be able to use a non-root user, as long as you've created the user with the same name on both master and worker machines.
Check this documentation for information on running a standalone cluster.
DataStax has restricted the use of OpsCenter to enterprise users only. Is there any way or possibility that even an open-source Cassandra user can have access to OpsCenter? Kindly check the following image and let me know if there is any possibility of using OpsCenter.
I am trying to connect to it but getting the following error:
No. However, you could consider building a monitor of your own from the metrics exposed over JMX instead.
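As a rough starting point, Cassandra exposes its metrics over JMX on port 7199 by default, and nodetool reads that same interface, so you can already poll a few things without extra tooling (the host below is a placeholder):
nodetool -h <node-ip> -p 7199 status     # node liveness and ring ownership
nodetool -h <node-ip> -p 7199 tpstats    # thread pool / request metrics
For dashboards, a generic JMX collector (jconsole for ad-hoc inspection, or a JMX exporter for your monitoring system) pointed at port 7199 can graph the same MBeans OpsCenter would have shown.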
The servers we use have both public and private internal IPs. When using MemSQL Ops GUI to add MemSQL host and nodes, the installation defaults to using the public IP even when provided with the private IP.
How can we have the private IP used during installation? Or how can this IP be changed after the installation?
I tried using memsql-ops memsql-update-config to update the reported_hostname setting, which confirms a successful change and says to restart. memsql-ops cluster-restart doesn't show any changes though.
Process that works:
Installing the agents from the command line with memsql-ops agent-deploy (after the initial Ops install), then using memsql-ops restart on each node to restart with the new interface and host bindings. Once the agent is restarted and showing the private IP, using the Ops UI to install the MemSQL node works fine.
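In rough commands, the sequence was something like this (the --host flag on agent-deploy is an assumption from memory; confirm the exact syntax with memsql-ops agent-deploy --help before relying on it):
memsql-ops agent-deploy --host <private-ip>    # assumed flag; deploys the agent to the given host
memsql-ops restart                             # run on each node so the agent rebinds to the private interface
After that, the MemSQL nodes can be added from the Ops UI as usual.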
The IP shown in Ops is not necessarily the IP used by MemSQL, the database itself.
The source of truth about the addresses used in the cluster is the output of the SHOW LEAVES command.
http://docs.memsql.com/latest/ref/SHOW_LEAVES/
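For example, connecting to the master aggregator with any MySQL-compatible client (the host and credentials below are placeholders):
mysql -h <master-aggregator-ip> -P 3306 -u root -e "SHOW LEAVES"
The Host and Port columns in that output are the addresses the cluster actually uses for its leaf nodes, regardless of what the Ops UI displays.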
I've been working with Storm topologies and Cassandra databases for a relatively short period of time. I recently realized that my development environment's spec is not strong enough for my testing, so I deployed a 3-node Cassandra cluster on Google Cloud instances. Now I'd like to let a Storm topology (hosted on a separate box) insert into Cassandra. Obviously, this is not enabled by default, and I'd like a guideline on how to securely open Cassandra to database queries from a different IP in a production scenario. (I suspect that Google protects its instances with a firewall as well?)
Following Carlos Rojas's directions in THIS LINK, I could open the ports to access Cassandra from outside the network. Also, you can open the ports in your firewall using this line from THIS LINK:
gcutil addfirewall cassandra-rule --allowed="tcp:9042,tcp:9160" --network="default" --description="Allow external Cassandra Thrift/CQL connections"
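Note that gcutil has since been deprecated; a roughly equivalent rule with the current gcloud CLI, restricted to just the Storm box so the cluster is not open to the whole internet (the rule name and source address are placeholders), would be:
gcloud compute firewall-rules create cassandra-rule --network=default --allow=tcp:9042,tcp:9160 --source-ranges=<storm-box-ip>/32 --description="Allow Cassandra CQL/Thrift from the Storm host"
You will also need Cassandra itself listening on the instance's private address (rpc_address / broadcast_rpc_address in cassandra.yaml) rather than only on localhost.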