Hi, I am new to the internals of the ELK stack.
I am running a Logstash process in the background, and when it picked up the matching file pattern it reported the message below.
I want to understand the importance of the path.data option here. Please help me out.
[FATAL][logstash.runner] Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
The path.data directory is used by Logstash and its plugins for any data they need to persist, and it needs to be different for each instance you are running, since Logstash does not allow multiple instances to share the same path.data.
By default its value is LOGSTASH_HOME/data, which under the Debian and RPM packages is /usr/share/logstash/data, and it is taken by the first Logstash instance unless explicitly specified otherwise.
If you want to run multiple Logstash instances, you need to set path.data either on the command line,
bin/logstash -f <config_file.conf> --path.data PATH
(make sure the directory is writable)
or specify it in the logstash.yml file under /etc/logstash/ for each instance.
That means you have two Logstash instances running and they cannot share the same data directory. You either need to kill the other instance or if you really want to have two instances running, you need to configure them to have different data directories.
Inside logstash.yml, you need to change the path.data setting for each instance.
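For example, a minimal sketch of running two instances side by side (the pipeline files and data directories below are made up for illustration):

bin/logstash -f pipeline_a.conf --path.data /var/lib/logstash-a
bin/logstash -f pipeline_b.conf --path.data /var/lib/logstash-b

Or, equivalently, set path.data: /var/lib/logstash-a in the logstash.yml of one instance and a different directory in the other's.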
I'm setting up an environment that may contain multiple Docker containers. Each container runs the same Spring Boot application. At runtime the Spring Boot application needs an .ini file for various tasks. Furthermore, the .ini file might be updated from outside the containers. Such an update, or a new .ini file, should be distributed to all other containers so that it is available to the other Spring Boot apps. Distributing the file is not the problem at this point, but rather how to store it, because the classpath can't be used.
I'm using Hazelcast for its cluster feature. With its help I'm able to distribute the new file to all other members in the cluster. At the beginning I stored the .ini file on the classpath, but if the .ini file changes, that makes no sense, because you cannot write inside a jar. Also, if the container goes down, the Hazelcast data is lost, because it is only an in-memory store.
What I expect is a process where I can easily substitute the .ini file. For example, a container already knows the file (all newer versions of the .ini file will have the same name) at build time or something like that. If the container goes down, it should be able to find the file by itself again. And, as I already mentioned, I need to change the .ini file at runtime; the container, or more specifically the Spring Boot app, has to recognize this change automatically. In my opinion the change could be done via a REST call that stores the file anywhere within the container, or some other place where writing is allowed, because the classpath doesn't work.
As your question is tagged "kubernetes", I will try to answer in the context of this specific container orchestrator.
The feature you are looking for is called ConfigMap in Kubernetes.
Think of it as key-value pairs created from a data source (in your case the ini config file):
kubectl create configmap game-config --from-file=.ini-file
You can then use ConfigMap data in two ways inside your containers:
As container environment variables
As a populated volume, mounted inside the container under a specific path (see the sketch below)
An important thing to note here is that mounted ConfigMaps are updated automatically. If you are interested in this concept, please read more about it here.
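For illustration, here is a minimal sketch of consuming such a ConfigMap as a mounted volume; the pod name, image, and mount path are made up, and only the ConfigMap name game-config comes from the command above:

apiVersion: v1
kind: Pod
metadata:
  name: springboot-app
spec:
  containers:
  - name: app
    image: my-springboot-image        # placeholder image
    volumeMounts:
    - name: ini-config
      mountPath: /config              # the .ini file appears under this path
  volumes:
  - name: ini-config
    configMap:
      name: game-config

The keys of the ConfigMap (the file names passed via --from-file) show up as files under /config inside the container, so the Spring Boot app can read the .ini file from there instead of from the classpath.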
I have a MongoDB Docker container (the stock one downloaded from the Docker repo). Its log size is unconstrained (/var/lib/docker/containers/'container_id'/'container_id'-json.log).
This recently caused a server to fill up so I discovered I can instruct the docker daemon to limit the max size of a container's log file as well as the number of log files it will keep after splitting. (Please forgive the naiveté. This is a tools environment so things get set up to serve immediate needs with an often painful lack of planning)
Stopping the container is not desirable (though it wouldn't bring about the end of the world), so doing so is probably a suitable plan G.
Through experimentation I discovered that running a different instance of the same docker image and including --log-opt max-size=1m --log-opt max-file=3 in the docker run command accomplishes what I want nicely.
I'm given to understand that I can include this in the docker daemon.json file so that it will work globally for all containers. I tried adding the following to the file "/etc/docker/daemon.json"
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
Then I sent a -SIGHUP to the daemon. I did observe that the daemon's log spit out something about reloading the config and it mentioned the exact filepath at which I made the edit. (Note: This file did not exist previously. I created it and added the content.) This had no effect on the log output of the running Mongo container.
After reloading the daemon I also tried instantiating the different instance of the Mongo container again and it too didn't observe the logging directive that the daemon should have. I saw its log pass the 10m mark and keep going.
My questions are:
Should there be a way for updates to logging via the daemon to affect running containers?
If not, is there a way to tell the container to reload this information while still running? (I see docker update, but this doesn't appear to be one of the config options that can be updated.)
Is there something wrong with my config? I tested including a nonsensical directive to see if mistakes would fail silently, and they did not: a directive not in the schema raised an error in the daemon's log. This indicates that the content I added (displayed above) is at least expected, though possibly incomplete or something. The options seem to work in the docker run command but not in the config. Also, I initially tried including the "3" as a number, and that raised an error too, which disappeared when I stringified it.
I did see in the file "/var/lib/docker/containers/'container_id'/hostconfig.json" for the other instance of the Mongo container (the one where I included the directives in its run command) that these settings were visible. Would it be effective/safe to manually edit this file for the production instance of the Mongo container to match the proof-of-concept container's config?
Please see below some system details:
Docker version 1.10.3, build 20f81dd
Ubuntu 14.04.1 LTS
My main goal is to understand why the global config didn't seem to work and if there is a way to make this change to a running container without interrupting it.
Thank you, in advance, for your help!
This setting will be the new default for newly created containers, not existing containers even if they are restarted. A newly created container will have a new container id. I stress this because many people (myself included) try to change the log settings on an existing container without first deleting that container (they've likely created a pet), and there is no supported way to do that in docker.
It is not necessary to completely stop the docker engine, you can simply run a reload command for this change to apply. However, some methods for running docker, like the desktop environments and Docker in Docker based installs, may require a restart of the engine when there is no easy reload option.
This setting will limit the json file to 3 separate 10 meg files, that's between 20-30 megs of logs depending on where in the file the third log happens to be. Once you fill the third file, the oldest log is deleted, taking you back to 20 megs, a rotation is performed in the other logs, and a new log file is started. However json has a lot of overhead, approximately 50% in my testing, which means you'll get roughly 10-15 megs of application output.
Note that this setting is just the default, and any container can override it. So if you see no effect, double check how the container is started to verify there are no log options being passed there.
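For example, one way to check which log options a given container actually ended up with (mongo-prod below is just a placeholder name) is to inspect its host config:

docker inspect --format '{{json .HostConfig.LogConfig}}' mongo-prod

If the Config map in the output is empty, the container was created without any max-size/max-file options, which matches the behaviour described above: only containers created after the daemon default was in place pick it up.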
Changing daemon.json did not affect my running containers. Reloading the daemon and restarting Docker after editing /etc/docker/daemon.json worked, but only for new containers:
docker-compose down
sudo systemctl daemon-reload
sudo systemctl restart docker
docker-compose up -d
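If you would rather pin the limits per container instead of relying on the daemon default, the same options can also be set per service in a version 2 (or later) compose file, for example (service and image names are placeholders):

version: "2"
services:
  mongo:
    image: mongo
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

Options set this way take precedence over the daemon default for that service, which is the per-container override mentioned in the answer above.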
I was wondering if you could help me out here.
I am trying to run multiple Elasticsearch processes on the same (CentOS) server, but I have been unsuccessful so far.
I have not enabled the service wrapper, and Elasticsearch has been installed using the .rpm package.
The requirements are:
every instance belongs to a different cluster (cluster.name)
every instance uses a different port, 9201, 9202, 9203, etc.
every instance should be parameterised with different ES_HEAP_SIZE
The elasticsearch.yml file, where all parameters are described, is attached.
The questions are:
how to set a different configuration file per instance, when -Des.config seems to be deprecated in 2.2
how to set a custom ES_HEAP_SIZE (-Xmx=24G -Xms=24G) when
# bin/elasticsearch -Des.config=config/IP-spotlight.RRv4/elasticsearch.yml
[2016-02-14 19:44:02,858][INFO ][bootstrap ] es.config is no longer supported. elasticsearch.yml must be placed in the config directory and cannot be renamed.
please help ..
You have two solutions:
Download the Elasticsearch archive from the site and run it from different paths with different configs (a sketch follows below). You can monitor each running instance with a tool like supervisor. The main page for Elasticsearch downloads is here.
Run each instance inside a Docker container. This is the right way to go, because it is easier to deploy and manage. You can find an Elasticsearch Docker image here.
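As a rough sketch of the first option (paths, ports, and cluster names below are made up; each unpacked copy has its own config directory whose elasticsearch.yml sets a distinct cluster.name and http.port, and on 2.x the es.path.conf property, which points at a directory, should still be accepted, unlike es.config which pointed at a single file):

ES_HEAP_SIZE=24g /opt/es-clusterA/bin/elasticsearch -Des.path.conf=/opt/es-clusterA/config -d
ES_HEAP_SIZE=24g /opt/es-clusterB/bin/elasticsearch -Des.path.conf=/opt/es-clusterB/config -d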
I am sharing a Linux box with some coworkers, all of them developing in the Mesos ecosystem. The most convenient way to test a framework that I am hacking on is commonly to run mesos-local.sh (combining both master and slaves in one).
That works great as long as none of my coworkers does the same. As soon as one of them has used that shortcut, nobody else can, because the master-specific temp files are stored in /tmp/mesos and the user who ran that instance of Mesos owns those files and folders. So when another user tries to do the same thing, something like the following happens when trying to run any task from a framework:
F0207 05:06:02.574882 20038 paths.hpp:344] CHECK_SOME(mkdir): Failed to create executor directory '/tmp/mesos/0/slaves/201402051726-3823062160-5050-31807-0/frameworks/201402070505-3823062160-5050-20015-0000/executors/default/runs/d46e7a7d-29a2-4f66-83c9-b5863e018fee' Permission denied
Unfortunately, mesos-local.sh does not offer a flag for overriding that path whereas mesos-master.sh does via --work_dir=VALUE.
Hence the obvious workaround is to not use mesos-local.sh but master and slave as separate instances. Not too convenient though...
The easiest workaround for preventing that problem, no matter whether you run mesos-master.sh or mesos-local.sh, is to patch the environment setup within bin/mesos-master-flags.sh.
That file is used both by the mesos-master itself and by mesos-local, hence it is the perfect place to override the work directory.
Edit bin/mesos-master-flags.sh and add the following to it:
export MESOS_WORK_DIR=/tmp/mesos-"$USER"
Now run bin/mesos-local.sh and you should see something like this at the beginning of its log output:
I0207 05:36:58.791069 20214 state.cpp:33] Recovering state from '/tmp/mesos-tillt/0/meta'
With that, all users who patched their mesos-master-flags.sh accordingly will have their personal work-dir set up, and there is no more stepping on each other's feet.
And if you prefer not to patch any files, you could just as well prefix the startup of that Mesos instance with the environment variable set manually:
MESOS_WORK_DIR=/tmp/mesos-foo bin/mesos-local.sh
I created an instance while initializing Accumulo by calling accumulo init.
But now I want to remove that instance and create a new one.
Can anyone help me do this?
Remove the directory specified by the instance.dfs.dir property in $ACCUMULO_HOME/conf/accumulo-site.xml from HDFS.
If you did not specify an instance.dfs.dir in accumulo-site.xml, the default is "/accumulo".
You should then be able to call accumulo init with success.
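As a rough sketch, assuming the default instance.dfs.dir of /accumulo and that you run this as a user allowed to delete that path from HDFS:

hadoop fs -rm -r /accumulo
$ACCUMULO_HOME/bin/accumulo init

(On older Hadoop versions the equivalent of the first command is hadoop fs -rmr /accumulo.)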
I don't have enough rep to answer your follow-up comment.
Make sure you restarted the monitor process since reiniting. Furthermore, make sure the node you're running your monitor on has the same configuration as the rest of the instance, as well as the same configuration for Hadoop on the classpath based on HADOOP_HOME.