I have an ELK stack 7.6.2 with Logstash, an Elasticsearch cluster with 3 nodes, and Kibana. I would like to add security, but the only docs I can find always start from scratch. I would like to have an example for an already running cluster, in order not to mess it up. Thanks for your help.
Guillaume
You cannot enable security features on an already running cluster. Security settings are classified as static, meaning that they cannot be dynamically updated on the fly:
static:
These settings must be set at the node level, either in the elasticsearch.yml file, or as an environment variable or on the command line when starting a node. They must be set on every relevant node in the cluster.
dynamic:
These settings can be dynamically updated on a live cluster with the cluster-update-settings API.
See https://www.elastic.co/guide/en/elasticsearch/reference/7.6/modules.html for reference and for all settings that can be dynamically updated (you won't find security settings there).
Also, from this guide (https://www.elastic.co/guide/en/elasticsearch/reference/current/get-started-enable-security.html) one can tell that you need to stop your running Elasticsearch and Kibana instances in order to enable security.
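For reference, here is a minimal sketch of the kind of settings that would go into elasticsearch.yml on each node while that node is stopped (the certificate path is a placeholder, not something from your setup):

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12   # placeholder path
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12 # placeholder path

Transport TLS is required once security is enabled on a multi-node cluster, and Kibana will then need credentials (elasticsearch.username / elasticsearch.password) in kibana.yml.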
I hope this helps.
I've been asked to configure an ELK stack in order to manage logs from several applications of a certain client. I have the stack hosted and working on a Red Hat 7 server (followed this cookbook), and a test instance in a virtual machine with Ubuntu 16.04 (followed this other cookbook), but I've hit a roadblock and cannot seem to get through it. Kibana is rather new to me and maybe I don't fully understand the way it works. In addition, the client's most important application is managed with JHipster, another tool I am not familiar with.
Up until now, all I've found about JHipster and Logstash tells me to install the full ELK stack using Docker (which I haven't, and would rather avoid in order to keep the configuration I've already made). Kibana deployed through that method already comes with a dashboard tuned for displaying the information that the application sends with its native configuration, activated in application.yml with logstash: enabled: true.
So my questions would be: Can I get that preconfigured JHipster dashboard imported into my preexisting Kibana deployment? Where is the data logged by the application stored, and can I expect a humanly readable format? Is there any other way of testing that the configuration is working, since I don't have any traffic going through the test instance in the VM?
Since that JHipster app is not the only one I care about, I want other dashboards and inputs to be displayed for other applications as well, most probably using Filebeat.
Any reference to useful information is appreciated.
Yes you can. Take a look at this repository: https://github.com/jhipster/jhipster-console/tree/master/jhipster-console
The Kibana exports (in JSON format) are stored in that repository, along with the load.sh script.
The script adds the configuration by loading it via the API. As you can infer, your existing dashboards are not affected by this, so you can keep your current configuration.
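As an illustration of the same idea on a recent Kibana (the export filename below is just a placeholder), an exported NDJSON file can be loaded through the saved objects API:

curl -X POST "http://localhost:5601/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" \
  --form file=@dashboards.ndjson   # placeholder filename

Version compatibility between the exports in that repository and your own Kibana is worth checking before importing.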
As per the Apache doc http://spark.apache.org/docs/latest/monitoring.html,
spark.history.retainedApplications is described as "The number of application UIs to retain. If this cap is exceeded, then the oldest applications will be removed."
But I see more applications in the UI than the configured number. Is that correct, or does it only keep that many applications in memory and load the others again when needed? Please clarify. Thanks.
That setting specifically applies to the history server. If you don't have one started (it's typically used with YARN and Mesos I believe), then the setting you're after is spark.ui.retainedJobs. Check the Spark UI configuration parameters for more details.
These settings only apply to jobs, so in order to pass them to the master itself, check the spark.deploy options in the stand-alone deployment section. You can set them via the SPARK_MASTER_OPTS environment variable.
If you want to clean the data files produced by workers, check the spark.worker.cleanup options in the same section. You can set them via the SPARK_WORKER_OPTS environment variable on your workers.
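For illustration (the values here are arbitrary), those options could be set in conf/spark-env.sh like this:

# On the master: how many finished applications/drivers the master UI keeps
export SPARK_MASTER_OPTS="-Dspark.deploy.retainedApplications=50 -Dspark.deploy.retainedDrivers=25"

# On each worker: periodically clean up old application work directories
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.appDataTtl=604800"

spark.ui.retainedJobs itself is an application property, so it would go into conf/spark-defaults.conf or be passed with --conf when submitting a job.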
This question might be a silly one, but since I am new to Hadoop and there is very little material available online that can be used as a reference point, I thought this might be the best place to ask.
I have successfully configured a few computers in a multi-node setup. During the setup process I had to change many Hadoop files. Now I am wondering: can I use every single computer in a single-node configuration without changing any settings or Hadoop files?
You can make each of your nodes a separate instance, but you will certainly have to modify the configuration files and restart all the instances.
You can do that. Follow the steps below on each machine:
1. Remove the IP or hostname from the masters file.
2. Remove the IPs or hostnames from the slaves file.
3. Change the fs.defaultFS property in core-site.xml to point at the local machine (see the sketch after these steps).
4. Change the Resource Manager IP in the same way.
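For illustration, a minimal core-site.xml for a single-node setup might look like this (port 9000 is just the value commonly used in tutorials):

<configuration>
  <property>
    <!-- point HDFS at the local machine instead of the former master -->
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>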
What is the best way to set YAML settings? I am using Docker containers and want to automate the process of setting cassandra.yaml settings like seeds, listen_address & rpc_address.
I have seen something like this in other YAML tooling: <%= ENV['envsomething'] %>
Thanks in advance
I don't know about the "best" way, but when I set up a scripted cluster of Cassandra servers on a few Vagrant VMs I used Puppet to set the seeds and so on in cassandra.yaml.
I did write some scripting that used PuppetDB to keep track of the addresses of the hosts, but this wasn't terrifically successful. The trouble was that the node that came up first only had itself in the list of seeds and so tended to form a cluster on its own. Then the rest would come up as a separate cluster, so I had to take down the solo node, clear it out and restart it with the correct config.
If I did it now I would give the nodes static IPs, then use those to fill in the templates for the cassandra.yaml files on all the nodes. Then hopefully the nodes would come up with the right idea about the other cluster members.
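For example, a Puppet ERB template for cassandra.yaml along those lines might look like the fragment below (the variable names are just placeholders, not from any particular module):

# @cluster_name, @node_ip and @seed_ips are placeholder template variables
cluster_name: '<%= @cluster_name %>'
listen_address: <%= @node_ip %>
rpc_address: <%= @node_ip %>
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "<%= @seed_ips.join(',') %>"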
I don't have any experience with Docker, but they do say the way to use Puppet with Docker is to run Puppet on the container before starting it up.
Please note that you need a lot of memory to make this work. I had a machine with 16GB and that was a bit dubious.
Thank you for the information.
I was considering using https://github.com/go-yaml/yaml
But this guy did the trick: https://github.com/abh1nav/docker-cassandra
Thanks
If you're running Cassandra in Docker, use this as an example: https://github.com/tobert/cassandra-docker. You can override the cluster name/seeds when launching, so whatever config management tool you use for deploying your containers could do something similar.
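As a rough sketch of that pattern (the environment variable names and paths here are made up for illustration), an entrypoint script can rewrite cassandra.yaml from environment variables before starting the daemon:

#!/bin/sh
# Sketch entrypoint: CASSANDRA_SEEDS, CASSANDRA_LISTEN_ADDRESS and the config path are illustrative.
: "${CASSANDRA_SEEDS:=127.0.0.1}"
: "${CASSANDRA_LISTEN_ADDRESS:=$(hostname -i)}"
CONF=/etc/cassandra/cassandra.yaml
sed -i "s/- seeds: .*/- seeds: \"$CASSANDRA_SEEDS\"/" "$CONF"
sed -i "s/^listen_address: .*/listen_address: $CASSANDRA_LISTEN_ADDRESS/" "$CONF"
sed -i "s/^rpc_address: .*/rpc_address: $CASSANDRA_LISTEN_ADDRESS/" "$CONF"
exec cassandra -f   # run in the foreground so Docker can supervise it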
I am using Puppet 3.2.3, Passenger and Apache on CentOS 6. I have 680 compute nodes in a cluster, along with 8 gateways that users log in to in order to submit jobs. All the nodes and gateways are under Puppet control. I recently upgraded from 2.6. The master logs to syslog as desired, but how to change the log level for the master escapes me. I appear to have the choice of --debug or nothing: debug logs far too much detail, while not using that switch simply logs each time Passenger/Apache launches a new worker to handle incoming connections.
I find nothing in the online docs about doing this. What I want is to log each time a node hits the server, but I do not need to see the compiled catalogue or resources in /var/log/messages.
How is this accomplished?
This is a hack, but here is how I solved the problem. In the file (config.ru) that Passenger uses to launch Puppet via Rack middleware, which on my system lives in /usr/share/puppet/rack/puppetmasterd, I noticed these lines:
require 'puppet/util/command_line'
run Puppet::Util::CommandLine.new.execute
So I edited this to become:
require 'puppet/util/command_line'
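# Raise the master's log level to :info so each node check-in is logged without the full --debug noise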
Puppet::Util::Log.level = :info
run Puppet::Util::CommandLine.new.execute
I suppose other choices for Log.level could be :warn and others.