How to run multiple Logstash instances on the same host - logstash

I want to run multiple Logstash instances on the same host, but I don't know how to configure this.
Can someone tell me, step by step, how to set up the configuration?
Thank you for your interest.

You can tell logstash where to get the config file on the command line:
bin/logstash -f config1.conf
Start a second one with a different config file:
bin/logstash -f config2.conf
If you want to do this at startup, you will need to configure a separate init.d/systemd service for each desired process, for example as sketched below.
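A minimal sketch of such a systemd unit, assuming a package install under /usr/share/logstash (the unit name, config path, and data directory are placeholders, adjust to your install); newer Logstash versions also require each instance to have its own path.data, otherwise the second instance will refuse to start:
# /etc/systemd/system/logstash-second.service (hypothetical unit name)
[Unit]
Description=Second Logstash instance
After=network.target

[Service]
# -f points at this instance's pipeline config; --path.data gives it a private data directory
ExecStart=/usr/share/logstash/bin/logstash -f /etc/logstash/config2.conf --path.data /var/lib/logstash-second
Restart=on-failure

[Install]
WantedBy=multi-user.target
Enable it alongside the stock service with systemctl enable --now logstash-second.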
Can you explain what your use case is? Logstash can handle multiple inputs and some pretty complex configurations with just one process.
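For example, a single pipeline can read several inputs and route each one to its own output; a rough sketch (the file paths, hosts, and index names are made up for illustration):
input {
  file {
    path => "/var/log/app1/*.log"
    type => "app1"
  }
  file {
    path => "/var/log/app2/*.log"
    type => "app2"
  }
}
output {
  # route each input to its own index based on the type set above
  if [type] == "app1" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "app1-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "app2-%{+YYYY.MM.dd}"
    }
  }
}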

Related

Thingsboard container doesn't update on config change

I am trying to update the database config (parameters such as certain ports, the DB address, etc.). Unfortunately, according to the logs, only some of the ports change.
I am using the following Dockerfile:
FROM thingsboard/tb-cassandra #repo link https://github.com/thingsboard/thingsboard/tree/master/msa/tb
RUN sed -i "s/:5683/:12090/g" /usr/share/thingsboard/conf/thingsboard.yml #this one works
RUN sed -i "s/localhost:5432/postgres:5432/g" /usr/share/thingsboard/conf/thingsboard.yml #this one doesnt update anything
What can cause such behavior? Is there a way to deploy the solution?
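A quick way to check whether the second sed actually changed anything inside the built image is to grep the file there (the image tag below is a placeholder for whatever you built):
docker run --rm my-tb-image grep -n "5432" /usr/share/thingsboard/conf/thingsboard.yml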

How to run multiple scripts on a remote machine

I have to remotely connect to a gateway (running on a Linux platform), inside which I have a couple of executable files (signingModule.sh and taxModule.sh).
Now I want to write one script on my desktop which will connect to that gateway and run signingModule.sh and taxModule.sh in two different terminals.
I have written the code below:
ssh root@10.138.77.150   # to connect to gateway
sleep 5
cd /opt/swfiscal/signingModule   # path of both modules
./signingModule   # executable
Through this code I am able to connect to my gateway, but after connecting nothing else happens.
2nd code:
source configPath   # file where I have given the path of both modules
cd $FCM_SCRIPTS   # variable in which I have stored the path of the modules
ssh root@10.138.77.150 'sh -' < startSigningModule   # to connect and run one module
As output of this I am getting:
-source: configPath: file not found
Please help me work this out. Thanks in advance.
Notes:
I can copy-paste my files to that gateway if required.
Gnome-Terminal or any alternative to it does not work on my gateway.
ssh root#10.138.77.150 "cd /opt/swfiscal/signingModule && ./signingModule"
The line source configPath doesn't work because you need to specify the full path to the file.
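For example, if configPath sits next to the modules (an assumption, since the question doesn't say where it lives):
source /opt/swfiscal/configPath   # adjust to wherever configPath actually lives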
You can pass several commands to ssh to run them in sequence, but I prefer a different solution: I keep the whole scripts locally, and running them remotely means:
Using scp to copy my script to the remote system
Using ssh to then run the script on the remote system
The big advantage here: there is always a potential for getting things wrong (for example: quoting) when directly giving commands to ssh. But when you put everything into a script, you have exact/full control over what is going to happen. You can put things like "set -e" into your script to improve error handling ...
(and of course, you can also automate the two steps listed above!)
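A rough sketch of those two steps, using the gateway IP from the question (the local script name and the /tmp destination are assumptions):
scp ./startSigningModule root@10.138.77.150:/tmp/startSigningModule   # step 1: copy the script to the remote system
ssh root@10.138.77.150 'sh /tmp/startSigningModule'                   # step 2: run it there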

Run a command in remote server

What would be the best way to run commands on remote servers? I am thinking of using SSH, but is there a better way than that?
I use Red Hat Linux, and I want to run a command on one of the servers, specify which other servers the command should run on, and have it do exactly the same thing on the servers specified. Puppet alone couldn't do it, but I might be able to combine some other tool with Puppet to do the job for me.
It seems you are able to log on to the other servers without entering a password. I assume this is based on SSH keys, as described here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-ssh-configuration-keypairs.html
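If that still needs to be set up, key-based login is roughly this (a sketch; the username and hostname are placeholders):
ssh-keygen -t rsa                          # generate a key pair, accepting the defaults
ssh-copy-id username@server1.example.com   # install your public key on each target server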
You say another script is producing a list of servers. You can now use the following simple script to loop over the list:
for server in $(./server-list-script); do
  echo "$server:"
  ssh username@"$server" mkdir /etc/dir/test123
done > logfile 2>&1
The file "logfile" will collect the output. I'm pretty sure Puppet is able to do this as well.
Your solution will almost definitely end up involving ssh in some capacity.
You may want something to help manage the execution of commands on multiple servers; ansible is a great choice for something like this.
For example, if I want to install libvirt on a bunch of servers and make sure libvirtd is running, I might pass a configuration like this to ansible-playbook:
- hosts: all
  tasks:
    - yum:
        name: libvirt
        state: installed
    - service:
        name: libvirtd
        state: running
        enabled: true
This would ssh to all of the servers in my "inventory" (a file -- or command -- that provides ansible with a list of servers), install the libvirt package, start libvirtd, and then arrange for the service to start automatically at boot.
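A minimal inventory could be as small as this (the hostnames and file names are made up; pass the file to ansible-playbook with -i):
# inventory.ini
server1.example.com
server2.example.com
Then run the playbook against it with something like ansible-playbook -i inventory.ini playbook.yml, where playbook.yml contains the YAML shown above.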
Alternatively, if I want to run puppet apply on a bunch of servers, I could just use the ansible command to run an ad-hoc command without requiring a configuration file:
ansible all -m command -a 'puppet apply'

How to start Logstash with a config file

I have a problem with Logstash when I want to start it from my config file with the command "bin/logstash -f logstash-rt.conf".
This returns:
Error: No config files found: logstash-rt.conf
Can you make sure this path is a logstash config file?
You may be interested in the '--configtest' flag which you can
use to validate logstash's configuration before you choose
to restart a running system.
Can someone help me?
Thanks.
You need to put the config file in the Logstash folder and run the command from that folder: bin/logstash is a relative path, and so is the .conf file, so both are resolved from the directory you run the command in.
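Alternatively, give the full path to the file. For example, assuming it actually lives in /etc/logstash/conf.d/ (just an assumption about its location), validate it first and then start it:
bin/logstash -f /etc/logstash/conf.d/logstash-rt.conf -t   # -t only validates the config
bin/logstash -f /etc/logstash/conf.d/logstash-rt.conf      # then start it for real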
Make a .config file, paste it into the bin folder of Logstash, and then use the command below in cmd to start it:
logstash -f filename.config
Below is a command to load a config file (here apache.config) with automatic config reloading:
sudo bin/logstash -f apache.config --config.reload.automatic
or, pointing at a file under the conf.d directory:
sudo bin/logstash -f /etc/logstash/conf.d/filter.conf --config.reload.automatic

pppoe-server log file

I have successfully installed the Roaring Penguin (rp-pppoe) pppoe-server and am trying to use it without success. What I don't understand is that when I put the following in my /etc/ppp/pppoe-server-options:
debug
logfile /var/log/pppoe-server-log
But that file is not created, and I don't understand what is happening. It is really hard for me to find a solution. Do you know how I can enable debugging?
My problem is that every time (sniffing with Wireshark) I catch
RP-PPPoE: Child pppd process terminated
in the PADT message. Any help?
Thanks in advance.
Since your question isn't formatted, first verify that debug and logfile /var/log/pppoe-server-log are on separate lines in your configuration file. Also, ensure that you've restarted the service so it picks up the new configuration. If the service is not running as root, make sure the user it runs as has ownership of the logfile so it can write to it. If it is running as root, ensure the file exists and is writable.
If it doesn't exist, just run:
# touch /var/log/pppoe-server-log
# chmod 0774 /var/log/pppoe-server-log
I would think this should be done automatically, but you may as well do so just to ensure it's created properly and you can verify ownership/permissions as needed.
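If pppoe-server does run as a dedicated non-root user, handing it the file would look something like this (the pppoe user and group names are assumptions; check your distribution):
# chown pppoe:pppoe /var/log/pppoe-server-log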
