I was wondering if Kibana 6.5.0 supports the option of pointing to multiple Elasticsearch nodes.
I have 5 Elasticsearch nodes in a cluster and I want to point a single Kibana instance at those nodes (I do not want to use a dedicated query/coordinating node or similar).
I tried the "elasticsearch.host" setting in the YAML file, but it only supports one Elasticsearch URL.
I also tried the "elasticsearch.url" and "elasticsearch.urls" settings as described in a specific section on elastic.io, but they do not work... basically Kibana crashes.
Any idea whether this specific version of Kibana can point to multiple cluster nodes? If so, is there an example of how to use the setting?
Thank you.
It is not possible; this version does not support multiple hosts.
This feature was implemented in version 6.6.
To be able to point Kibana to multiple hosts you will need to upgrade your stack.
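For reference, a minimal sketch of what the setting looks like in kibana.yml on 6.6 and later (hostnames and ports are placeholders):

elasticsearch.hosts:
  - "http://es-node-1:9200"
  - "http://es-node-2:9200"
  - "http://es-node-3:9200"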
The setup is 3 nodes: two warm nodes with 5 TB of storage each and a hot node with 2 TB. I want to add 2 TB of storage to each of the two warm nodes.
Each node runs as a Docker container on a Linux server, which will be shut down while the disks are added. I do not know how to make Elasticsearch utilize the extra space after adding the disks.
No docker-compose files are used.
The Elasticsearch image is started without specifying volumes; only the elasticsearch.yml file is specified. That file does not set any of the path properties.
You can use multiple data paths by editing the YAML configuration file:
path:
  data:
    - /mnt/disk_1
    - /mnt/disk_2
    - /mnt/disk_3
In recent Elasticsearch versions this option is deprecated; see the official documentation to migrate to an alternative configuration.
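Since the nodes run in Docker, the new disks also have to be mounted into the container so those data paths exist inside it. A rough sketch of the idea (image tag, host paths and config location are placeholders for your actual setup):

docker run -d \
  -v /mnt/disk_1:/mnt/disk_1 \
  -v /mnt/disk_2:/mnt/disk_2 \
  -v /path/to/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  docker.elastic.co/elasticsearch/elasticsearch:6.8.23

After the container is restarted, Elasticsearch should start placing new shard data across the configured paths.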
Building a Thingsboard cluster
I need help setting up a ThingsBoard cluster; the documentation online is very limited.
The cluster will contain 2 ZooKeeper nodes and 4 ThingsBoard nodes with a Cassandra DB.
Should Zookeeper be installed separately?
A step-by-step guide would be much appreciated!
I cannot provide detailed step-by-step instructions to set up a ThingsBoard cluster, but I can point you in the right direction by sharing the different documents you need.
Bottom line, the following tasks must be completed:
Install and configure a ZooKeeper ensemble.
Check the ZooKeeper documentation for further installation details. Keep in mind that you need at least three ZK nodes in a clustered environment, and that you always need an odd number of ZK nodes (3, 5, 7, ...). It is a very bad idea to build a cluster consisting of two ZK nodes; check the split-brain condition that can appear under those circumstances. Basically, you set up the number of individual nodes you wish to use and change the configuration file to enable the different nodes as an ensemble. This is documented quite well in the ZK docs.
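For illustration, a minimal zoo.cfg sketch for a three-node ensemble (hostnames and paths are placeholders); the same server list goes on every node, and each node additionally gets a myid file containing its own id:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk-node-1:2888:3888
server.2=zk-node-2:2888:3888
server.3=zk-node-3:2888:3888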
Install and configure a Cassandra cluster.
Again, you set up the number of individual nodes you need for your Cassandra cluster and modify the individual configuration files to turn them into a cluster. Check the Cassandra documentation for details. Be sure to verify the configuration using the nodetool status command as described at the end of the document; all your nodes should be up and running.
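As an example, these are the cassandra.yaml settings typically adjusted per node to form a cluster (cluster name, addresses and snitch are placeholders; check the Cassandra documentation for your version):

cluster_name: 'thingsboard-cluster'
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.11,10.0.0.12"
listen_address: 10.0.0.11
rpc_address: 10.0.0.11
endpoint_snitch: GossipingPropertyFileSnitch

When the cluster is healthy, nodetool status lists every node with state UN (Up/Normal).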
Install and configure a ThingsBoard cluster.
Use the instructions provided with ThingsBoard single node setup.
Install Java
Skip External database installation
ThingsBoard service installation
Configure ThingsBoard to use the external database - Cassandra
Go to Cluster setup and apply the configuration steps described there (ZK, Cassandra and RPC). Keep in mind to point to ALL members of your ZK and Cassandra clusters; you can also use IP addresses instead of host names (a sketch follows after this list).
Return to the single-node setup and run the installation script on ONE node only!
Start ThingsBoard service
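As a rough sketch only: on each ThingsBoard node the cluster-related settings end up looking roughly like the environment variables below. The exact variable names differ between ThingsBoard versions, so treat these as illustrative and confirm them against the Cluster setup guide; all addresses are placeholders.

# Illustrative only - verify the exact names for your ThingsBoard version
export ZOOKEEPER_ENABLED=true
export ZOOKEEPER_URL=zk-node-1:2181,zk-node-2:2181,zk-node-3:2181
export CASSANDRA_URL=cassandra-node-1:9042,cassandra-node-2:9042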
If everything went well, you should be able to access your ThingsBoard nodes directly using the URL http://[NODE_IP]:8080. You can verify proper cluster operation by creating a tenant on one node and checking its presence on another node.
I don't know if using an even number of ThingsBoard nodes is a good idea. The documentation does not mention anything about this.
One final remark: you could/should consider putting a proxy in front of your ThingsBoard cluster to provide load balancing for your web clients and improve the user experience. That way you don't have to share the individual host addresses with your users, and you prevent a single node from being overloaded because everybody uses the same web address to access your dashboard(s). You could also proxy your MQTT broker to provide load balancing as well.
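For example, a minimal nginx reverse-proxy sketch that spreads web traffic across the ThingsBoard nodes (hostnames and ports are placeholders):

upstream thingsboard_ui {
    server tb-node-1:8080;
    server tb-node-2:8080;
    server tb-node-3:8080;
    server tb-node-4:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://thingsboard_ui;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}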
Good luck in setting up your cluster!
ZooKeeper needs at least 3 nodes to run in cluster mode. Each node votes, and a majority of nodes (the quorum) must be available; with 3 nodes the quorum is 2, which is why 3 is the minimum useful ensemble size.
We are currently running a Hazelcast cluster, using it to put information on a queue that is picked up by a single node in the cluster. However, we are vulnerable to a "rogue" node that joins the cluster without the right version of the software to handle requests properly.
Is there a way to proactively remove rogue nodes of this nature and prevent them from re-joining the cluster? I haven't been able to find one in the documentation.
It looks like you are using the default hazelcast.xml. You should use a custom hazelcast.xml with updated group credentials, so that only nodes configured with the same credentials can join the cluster.
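For example, in Hazelcast 3.x cluster membership is controlled by the group section of hazelcast.xml; a node with a different group name cannot join. A minimal sketch (the name and password values are placeholders):

<hazelcast xmlns="http://www.hazelcast.com/schema/config">
    <group>
        <name>my-app-cluster-v2</name>
        <password>my-cluster-secret</password>
    </group>
</hazelcast>

Bumping the group name whenever you roll out a new software version keeps nodes running the old version from joining the new cluster.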
What will the URL be if there are 2 Elasticsearch instances on the same machine? Say, for example, one Logstash config file points to the embedded Elasticsearch and another points to an external ES instance, both on the same machine.
How do I differentiate the two?
By default we are accessing Kibana using the URL: http://:9292
So far, Kibana 3 does not support this if the two Elasticsearch instances are not in the same cluster, so you have to set up a separate Kibana for each Elasticsearch instance.
You can follow this discussion about the feature you want.
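A rough sketch of how that looks: each Kibana 3 copy gets its own config.js whose elasticsearch entry points at the HTTP port of the instance it should query (the ports below are placeholders for whatever your two instances actually listen on):

// config.js for the Kibana copy that queries the first instance
elasticsearch: "http://localhost:9200",

// config.js for the Kibana copy that queries the second instance
elasticsearch: "http://localhost:9201",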
I'm using the Cassandra CQL/JDBC driver I got from Google Code, but it doesn't seem to let me provide a cluster name - is there a way?
I'm using cluster names to ensure I don't run commands against a live system; it has a different cluster name from my dev systems.
Edit: Just to clarify, I have two totally separate Cassandra clusters, one live and one for test. They have different cluster names to ensure that I don't accidentally run test code meant for the test cluster on the live cluster. Therefore any client I need to use must let me set a cluster name. Hector does this.
There is no inbuilt protection for checking cluster names for Cassandra clients. It is built to ensure nodes from different clusters don't try and join together but not to ensure clients connect to the right cluster. It would be possible to add this checking to a client though (since the cluster name is exposed to the client) but I'm not aware of any clients doing this.
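Since the cluster name is exposed through the system tables (system.local in Cassandra 1.2+), such a client-side guard is straightforward to sketch. The JDBC URL format follows the cassandra-jdbc project, and the expected cluster name is a placeholder; adapt both to your setup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ClusterNameGuard {
    public static void main(String[] args) throws Exception {
        // Connect, read the cluster name, and refuse to continue if it is not the dev cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:cassandra://localhost:9160/mykeyspace");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT cluster_name FROM system.local")) {
            rs.next();
            String name = rs.getString("cluster_name");
            if (!"DevCluster".equals(name)) {
                throw new IllegalStateException("Refusing to run: connected to cluster " + name);
            }
            // ...safe to run the intended commands here...
        }
    }
}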
I'd strongly recommend firewalling off your different environments to avoid this kind of mistake. If that isn't possible, you should choose different ports to avoid confusion. Change this with the 'rpc_port' setting in cassandra.yaml.
You'd have to mirror the data on two different clusters. You can't access the same cluster with different names.
To rename your cluster (from the default 'Test Cluster'), edit the Cassandra configuration file found at location/of/cassandra/conf/cassandra.yaml. It's the top line; if you need more details, look at the DataStax configuration documentation.
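For reference, the relevant line in cassandra.yaml (the value is whatever name you choose, and it must match on every node of that cluster):

cluster_name: 'My Dev Cluster'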