I need to install Elasticsearch to /opt/elasticsearch/ and keep everything under that path.
That is, I need the config path to live in /opt too.
According to https://www.elastic.co/guide/en/elasticsearch/reference/7.15/settings.html I can set the ES_PATH_CONF and ES_HOME environment variables to change the installation and config paths, but that doesn't work.
rpm --install elasticsearch-7.15.0-x86_64.rpm --prefix=/opt/elasticsearch/ is not what I need and doesn't change the config path.
It only puts the home directory under /opt/elasticsearch; I get the structure below and the other paths don't change. It still expects the binaries in /usr/share/elasticsearch/bin/
el6:~ # tree /opt/elasticsearch/ -d -L 3
/opt/elasticsearch/
├── lib
│ ├── sysctl.d
│ ├── systemd
│ │ └── system
│ └── tmpfiles.d
└── share
└── elasticsearch
├── bin
├── jdk
├── lib
├── modules
└── plugins
but I need:
el5:~ # tree /opt/elasticsearch/ -d -L 1
/opt/elasticsearch/
├── bin
├── config
├── data
├── jdk
├── lib
├── logs
├── modules
└── plugins
With a manual installation:
mkdir /opt/elasticsearch/ && tar -xzf elasticsearch-7.15.0-linux-x86_64.tar.gz -C /opt/elasticsearch/ --strip-components 1
I get the structure I need. I created this systemd unit:
[Unit]
Description=Elasticsearch
Documentation=https://www.elastic.co
Wants=network-online.target
After=network-online.target
[Service]
Type=notify
RuntimeDirectory=elasticsearch
PrivateTmp=true
Environment=ES_HOME=/opt/elasticsearch
Environment=ES_PATH_CONF=/opt/elasticsearch/config
Environment=PID_DIR=/var/run/elasticsearch
Environment=ES_SD_NOTIFY=true
EnvironmentFile=-/etc/sysconfig/elasticsearch
WorkingDirectory=/opt/elasticsearch
User=elasticsearch
Group=elasticsearch
ExecStart=/opt/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet
# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65535
# Specifies the maximum number of processes
LimitNPROC=4096
# Specifies the maximum size of virtual memory
LimitAS=infinity
# Specifies the maximum file size
LimitFSIZE=infinity
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0
# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM
# Send the signal only to the JVM rather than its control group
KillMode=process
# Java process is never killed
SendSIGKILL=no
# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143
# Allow a slow startup before the systemd notifier module kicks in to extend the timeout
TimeoutStartSec=5000
[Install]
WantedBy=multi-user.target
But the service doesn't start, doesn't crash, and doesn't write any logs to journalctl.
How can I install Elasticsearch in /opt with the config inside it?
You could install Elasticsearch to /opt/elasticsearch using your rpm command, then move the config files from their default location to your location of choice, and finally point the ES_PATH_CONF and ES_HOME environment variables to their respective new paths.
When using the "manual" installation method (downloading the .tar.gz) you have the freedom to put the files wherever you want. wget returns 404 because that file/URL does not exist; wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.15.0-linux-x86_64.tar.gz should be the correct one (you're missing -linux).
The only way to do that is to download the tar.gz into your directory, manually set all the environment variables, and build and manage your own init script.
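A minimal sketch of that tarball route, assuming an elasticsearch user and group already exist and that the unit shown above is used as-is (it already sets ES_HOME and ES_PATH_CONF via Environment=):
# Fetch the Linux tarball (note the -linux- in the file name) and unpack it into /opt
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.15.0-linux-x86_64.tar.gz
mkdir -p /opt/elasticsearch
tar -xzf elasticsearch-7.15.0-linux-x86_64.tar.gz -C /opt/elasticsearch --strip-components 1
# Let the service user own the whole tree so data/ and logs/ can be written under /opt
# (assumes the elasticsearch user and group already exist)
chown -R elasticsearch:elasticsearch /opt/elasticsearch
# For runs outside systemd, export the same variables the unit sets via Environment=
export ES_HOME=/opt/elasticsearch
export ES_PATH_CONF=/opt/elasticsearch/config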
I ran into a problem during my research.
I have a firmware file that was downloaded off the internet, and I'm trying to unpack it in order to emulate the firmware.
The good news is that I managed it once, but I reverted my machine and can't recreate the process now.
First of all, the file can't be extracted by the usual tools; you get an error that fewer than 2 layout blocks were found.
After that I dumped some info from the UBI file:
==> app_ubifs <==
1 named volumes found, 2 physical volumes, blocksize=0x20000
== volume b'bakfs' ==
-rw-r--r-- 1 0 0 37879808 2020-04-22 01:27:47 ubifs
From the time I got this to work I know that inside the bakfs volume there is another UBIFS image that can be extracted successfully with public tools.
I have tried a lot of ways to mount this image, but mounting always fails.
modprobe ubi
modprobe nandsim first_id_byte=0x20 second_id_byte=0xaa \
third_id_byte=0x00 fourth_id_byte=0x15
I believe this is the right configuration for blocksize=0x20000.
ubiformat /dev/mtd0 -f app_ubifs
ubiformat: mtd0 (nand), size 268435456 bytes (256.0 MiB), 2048 eraseblocks of 131072 bytes (128.0 KiB), min. I/O size 2048 bytes
libscan: scanning eraseblock 2047 -- 100 % complete
ubiformat: 2048 eraseblocks have valid erase counter, mean value is 0
ubiformat: flashing eraseblock 282 -- 100 % complete
ubiformat: formatting eraseblock 2047 -- 100 % complete
Formatting and flashing also work fine.
It's the next part that I really don't understand.
There are a hundred different approaches online and I can't seem to get any of them to work.
I would appreciate it if someone could help me with the process.
As I said, I already have the unpacked version with the filesystem from my earlier attempt, but I can't recreate the unpacking process now, so I know it's possible.
---- Solution
modprobe nandsim first_id_byte=0x2c second_id_byte=0xac third_id_byte=0x90 fourth_id_byte=0x15
This creates the simulated device for blocksize=0x20000.
Check that it is set up:
cat /proc/mtd
Let's clean it:
flash_erase /dev/mtd0 0 0
Now format and flash the image.
ubiformat /dev/mtd0 -f image.ubi -O 2048
Then attach the device.
modprobe ubi
ubiattach -p /dev/mtd0 -O 2048
And now I can mount it:
mount -t ubifs /dev/ubi0_X /mnt/ubifs
In my case it was ubi0_1; make sure to check this under /dev.
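Putting it all together, a minimal end-to-end sketch (assuming the image file is named image.ubi, the simulated flash shows up as /dev/mtd0, and the volume of interest is ubi0_1; adjust these to what your system actually shows):
modprobe nandsim first_id_byte=0x2c second_id_byte=0xac third_id_byte=0x90 fourth_id_byte=0x15
cat /proc/mtd                              # confirm mtd0 exists with erasesize 00020000
flash_erase /dev/mtd0 0 0                  # wipe the simulated flash
ubiformat /dev/mtd0 -f image.ubi -O 2048   # flash the UBI image with a 2048-byte VID header offset
modprobe ubi
ubiattach -p /dev/mtd0 -O 2048             # attach, creating /dev/ubi0 and its volume nodes
ls /dev/ubi0_*                             # find the volume number (ubi0_1 in my case)
mkdir -p /mnt/ubifs
mount -t ubifs /dev/ubi0_1 /mnt/ubifs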
Another quick alternative to access the files inside the image, in case the nandsim module is not available for the current kernel (on a Debian-based OS):
apt install liblzo2-dev
pip install python-lzo ubi_reader
Then, in the same folder where the UBIFS image is located, execute ubireader_extract_files ubifs.img and there you go:
├── ubifs.img
└── ubifs-root
└── 705623055
└── rootfs
├── bin
├── boot
├── dev
├── etc
├── home
├── lib
├── linuxrc -> /bin/busybox
├── media
├── mnt
├── opt
├── proc
├── sbin
├── Settings
├── srv
├── sys
├── tmp -> /var/tmp
├── usr
├── var
└── www
I have recently started working with Cassandra; everything has been well documented and easy to understand so far.
However, I am unable to find an answer to the following question:
Why does the Cassandra data directory (/var/lib/cassandra/data/ks) contain multiple subdirectories for the same table?
At what point is the new directory created?
[centos#cs1 2017-06-03--19-46-14 cassandra $] ls -l /var/lib/cassandra/data/ks
total 8
drwxr-xr-x. 3 root root 4096 Jun 3 19:46 events-4f35e2c0482911e79119511599d22fe7
drwxr-xr-x. 3 root root 4096 Jun 3 19:41 events-7a34c34047f411e7aee3b9dc2549db1c
[centos#cs1 2017-06-03--19-46-10 cassandra $] tree
.
├── events-4f35e2c0482911e79119511599d22fe7
│ ├── ks-events-ka-4-CompressionInfo.db
│ ├── ks-events-ka-4-Data.db
│ ├── ks-events-ka-4-Digest.sha1
│ ├── ks-events-ka-4-Filter.db
│ ├── ks-events-ka-4-Index.db
│ ├── ks-events-ka-4-Statistics.db
│ ├── ks-events-ka-4-Summary.db
│ ├── ks-events-ka-4-TOC.txt
│ └── snapshots
└── events-7a34c34047f411e7aee3b9dc2549db1c
└── snapshots
└── 1496472654574-device_log
└── manifest.json
5 directories, 9 files
I noticed that flushing or compacting does not create a new directory; it simply adds SSTables to, or compacts within, the most recent table directory.
When you drop a table, by default Cassandra takes a snapshot to prevent data loss in case the drop was unintended. In your case, events-7a34c34047f411e7aee3b9dc2549db1c is the older table and it only has a snapshots directory in it.
The cassandra.yaml parameter responsible for that behaviour is as follows:
auto_snapshot (Default: true) Enable or disable whether a snapshot
is taken of the data before keyspace truncation or dropping of tables.
To prevent data loss, using the default setting is strongly advised.
If you set to false, you will lose data on truncation or drop.
Remember to clean up old table snapshots in production-like environments; otherwise they can easily pile up and bloat the data directory.
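If you need to reclaim that space, a quick sketch (ks is the keyspace from the question; clearsnapshot semantics vary a little between Cassandra versions, and newer releases may require an explicit --all to drop everything):
# See how much disk the snapshots are using (path layout from the question)
du -sh /var/lib/cassandra/data/ks/*/snapshots
# Remove the snapshots kept for the ks keyspace on this node
nodetool clearsnapshot ks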
If you drop a keyspace (ks in my case), it does not delete the keyspace directory (/var/lib/cassandra/data/ks) from the filesystem (auto_snapshot: true). That is why I was still seeing the old directory/directories.
In Cassandra, when you drop a table, its directory remains inside the keyspace directory.
In your case, it seems you created a table with the same name as the table you dropped before. Because of that you have one table with two directories, one of which is useless; you can rm -rf that directory or run nodetool clearsnapshot.
When I run arangod like this:
arangod --supervisor --daemon --pid-file some_file \
--configuration some_file.conf \
single_instance
then
--database.directory
option is ignored and
/var/tmp/single_instance
directory containing
├── single_instance_db
├── databases
├── journals
├── LOCK
├── rocksdb
├── SERVER
└── SHUTDOWN
is created.
But when I run arangod like this:
arangod --supervisor --daemon --pid-file some_file \
--configuration some_file.conf
then
--database.directory
option is honoured.
Why?
(Is it a bug?)
One of the things arangod prints during startup for me is:
changed working directory for child process to '/var/tmp'
This is done by the supervisor before it forks the child.
Since you gave it the relative directory single_instance, the database directory is created in the then-current working directory /var/tmp/, resulting in a total database directory of /var/tmp/single_instance.
If you want a specific directory in that situation, you should specify an absolute path like /var/lib/arangodb/single_instance.
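For example, a sketch of the same invocation with the database directory given as an absolute path (the directory itself is just an example and must be writable by the arangod user):
arangod --supervisor --daemon --pid-file some_file \
    --configuration some_file.conf \
    /var/lib/arangodb/single_instance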
I have a simple Puppet environment, just started with one master and one agent.
I am getting the following error when I run puppet module list from my agent. When I run puppet agent -t it doesn't even reach my site.pp and test.pp.
I am not sure if I am missing anything in the Puppet configuration.
puppet module list
/usr/lib/ruby/site_ruby/1.8/puppet/environments.rb:38:in `get!': Could not find a directory environment named 'test' anywhere in the path: /etc/puppet/environments. Does the directory exist? (Puppet::Environments::EnvironmentNotFound)
from /usr/lib/ruby/site_ruby/1.8/puppet/application.rb:365:in `run'
from /usr/lib/ruby/site_ruby/1.8/puppet/util/command_line.rb:146:in `run'
from /usr/lib/ruby/site_ruby/1.8/puppet/util/command_line.rb:92:in `execute'
from /usr/bin/puppet:8
Here is my Puppet master puppet.conf
[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet
# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet
# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl
dns_alt_names = cssdb-poc-01.cisco.com cssdb-poc-01
[master]
server = cssdb-poc-01.cisco.com
certname = cssdb-poc-01.cisco.com
dns_alt_names = cssdb-poc-01.cisco.com cssdb-poc-01
environmentpath = /etc/puppet/environments
environment = test
[agent]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuratiion. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt
# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig
Here is the directory structure on puppet master.
[root#cssdb-poc-01 test]# tree /etc/puppet/environments/
/etc/puppet/environments/
├── example_env
│ ├── manifests
│ ├── modules
│ └── README.environment
├── production
└── test
├── environment.conf
├── manifests
│ └── site.pp
└── modules
└── cassandra
├── manifests
└── test.pp
Here is my puppet agent puppet.conf:
cat /etc/puppet/puppet.conf
[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet
# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet
# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl
[main]
server=cssdb-poc-01.cisco.com
environmentpath = /etc/puppet/environments
environment = test
[agent]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuratiion. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt
# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig
The issue was with my environment.conf file.
[root#cssdb-poc-01 templates]# cat /tmp/environment.conf
modulepath = /etc/puppet/environments/test/modules:$basemodulepath
manifest = manifests
I removed it from the environment directory and it started working. Not puppet module list, but puppet agent -t works now.
@Frank, you are right that puppet module list will not work on agent nodes.
Thanks for your help.
Custom modules will not show up in the puppet module list output. It lists modules with metadata, typically installed from the Forge using puppet module install.
On the agent, it is normal to have no local environments to search for modules in (or to install them into).
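If you want to double-check what the agent will actually request, a quick sketch (puppet config print is available on reasonably recent Puppet versions; test is the environment from the question):
# Show the environment settings the agent resolves from its puppet.conf
puppet config print environment environmentpath --section agent
# Run the agent explicitly against the test environment
puppet agent -t --environment test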
I am new to Puppet and would like to avoid some of the common issues that I see, and to get away from using import statements since they are being deprecated. I am starting with the very simple task of creating a class that copies a file to a single Puppet agent.
So I have this on the master:
/etc/puppet/environments/production
/etc/puppet/environments/production/modules
/etc/puppet/environments/production/manifests
/etc/puppet/environments/production/files
I am trying to create node definitions in a file called nodes.pp in the manifests directory, and to use a class that I have defined (the class is test_monitor) in a module called test:
node /^web\d+.*.net/ {
include test_monitor
}
However when I run puppet agent -t on the agent I get:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class test_monitor for server on node server
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
What is the proper way to configure this so it works? I would like to have node definitions in one or more files that have access to the classes I build in custom modules.
Here is my puppet.conf:
[main]
environmentpath = $confdir/environments
default_manifest = $confdir/environments/production/manifests
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
factpath=$vardir/lib/facter
[master]
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
I know this is probably something stupid that I am not doing correctly or have misconfigured, but I can't seem to get it to work. Any help is appreciated! To be clear, I am just trying to keep things clean, with classes in separate files and specific node types also in their own files. I have a small to medium sized environment (approx. 150 servers in a data center).
Let me guess: maybe the test module has the wrong structure. You need the following subfolders and files under the modules folder:
└── test
├── files
├── manifests
│ ├── init.pp
│ └── monitor.pp
└── tests
└── init.pp
I recommend changing test_monitor to test::monitor; that makes more sense to me. If you need to use test_monitor, you need a test_monitor module or a test_monitor.pp file.
node /^web\d+.*.net/ {
include test::monitor
}
Then put the monitor tasks in the monitor.pp file.
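If it helps, a shell sketch of laying out that skeleton under the production environment from the question (the class bodies still have to be written into init.pp and monitor.pp):
# Create the module skeleton described above
mkdir -p /etc/puppet/environments/production/modules/test/{files,manifests,tests}
touch /etc/puppet/environments/production/modules/test/manifests/init.pp
touch /etc/puppet/environments/production/modules/test/manifests/monitor.pp
touch /etc/puppet/environments/production/modules/test/tests/init.pp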
And that was as simple as adding the proper module path to puppet.conf:
basemodulepath = $confdir/environments/production/modules