Hi, I am new to Telegraf and InfluxDB. I know that we can tail (monitor) a local file (on the same machine where Telegraf is installed) and send the output to InfluxDB using Telegraf's [[inputs.tail]] and [[outputs.influxdb]] plugins.
But I want to tail a log file that lives on a different server from the one where Telegraf is installed.
One way would be to run Telegraf on the server where the log file is, but I can't do that, because that server cannot send data to InfluxDB: it has no access to the server where InfluxDB is running.
So I have to use an intermediate server in order to send data to InfluxDB.
Is there a way to tail the remote file, or some other way around this?
Any kind of suggestion is welcome.
I looked around and found a solution.
Telegraf's inputs.tail plugin has an option to tail a pipe, which we can use to monitor remote files.
Let's suppose serverA has the log file and serverB is where Telegraf is running.
Here are the steps to monitor a remote file via Telegraf (a small wrapper script tying the steps together is sketched after step 4).
1. First, create a named pipe on serverB.
mkfifo pipeName
2. Now run a command on serverB that uses ssh to tail the log file on serverA that you want to monitor, sending the output to the pipe on serverB.
ssh -q username@serverA tail -f "pathToFile"/out.log > pipeName
3. Now add the inputs.tail plugin to the Telegraf configuration file.
[[inputs.tail]]
files = ["pipeName"]
from_beginning = false
pipe = true
data_format = "json"
name_suffix = "_myMetrics"
These are configuration options you can change according to your requirements.
4. Now run Telegraf, and it will start writing the data to the output plugin you specified in your configuration file.
/usr/bin/telegraf -config demoTelegraf.conf
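The wrapper script mentioned above could look roughly like this; it is only a sketch, the user name, host, log path and pipe name are the placeholders from the steps, and it assumes passwordless ssh from serverB to serverA.
#!/bin/bash
# Rough sketch: user, host, log path and pipe name are placeholders.
PIPE=pipeName

# Step 1: create the named pipe if it does not already exist.
[ -p "$PIPE" ] || mkfifo "$PIPE"

# Step 2: keep an ssh session in the background feeding the pipe.
ssh -q username@serverA tail -f "pathToFile"/out.log > "$PIPE" &

# Step 4: start Telegraf with the configuration that tails the pipe (step 3).
/usr/bin/telegraf -config demoTelegraf.conf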
Related
Currently, I have a server that needs to batch-process a bunch of files. All of the files are on Server A, which is running Ubuntu, but I need them to be processed on a macOS server. Right now, I have a script that transfers all the files from Server A to Server B, processes all the files, then transfers all the files back to Server A.
The bash file looks like this (simplified):
script -q -c "scp -r files_to_process b:process_these 2>&1"
ssh b "process_all.sh"
script -q -c "scp -r processed_files a:final_dir 2>&1"
My question is this: is there an easy way to implement a simple queue between these servers?
Once a file has been transferred to B, I am wasting time by not processing it immediately.
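One simple option is a polling loop on Server B that processes each file as soon as it appears; a very rough sketch follows. Everything in it is an assumption for illustration: process_one.sh is a hypothetical script that handles a single file in place, and the directory names are placeholders.
# Hypothetical polling queue on Server B; process_one.sh and the paths are made up.
while true; do
  for f in ~/process_these/*; do
    [ -e "$f" ] || continue          # glob matched nothing, keep polling
    ./process_one.sh "$f" \
      && scp "$f" a:final_dir/ \
      && rm "$f"                     # only dequeue after a successful copy back
  done
  sleep 10
done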
Can anyone please let me know if there is a way to monitor a directory on CentOS (Linux) that is on a different server, so that when a new file arrives in that directory I can copy that file to my server?
One way would be to have the rsync command run periodically as a cron job. rsync is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination.
If you want to run the transfer from the remote server to the local server every 10 minutes, add a line like the one below to your crontab. It also does a few other things:
Compresses the data during transfer (-z)
Transfers it over ssh (-e ssh)
Logs the output to /var/log/rsync_my.log
Note: You will have to set up SSH key exchange to avoid the password prompt (a quick sketch of that follows the cron line).
*/10 * * * * /usr/bin/rsync -zrvh -e ssh root@<remoteIP>:<remoteDir> <localDir> >/var/log/rsync_my.log 2>&1
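The key exchange itself is a one-time setup; a quick sketch, with the user and host as placeholders:
ssh-keygen -t rsa              # accept the defaults; leave the passphrase empty
ssh-copy-id root@<remoteIP>    # copy the public key to the remote host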
My code below doesn't work when I ssh to the client and try to dump its log file to the server. Please look at the code below.
ssh 192.168.0.10
dmesg >>/log.txt
You need to include the command to run on the remote machine as part of your ssh command. You can then do the output redirection on your local side:
ssh 192.168.0.10 'dmesg' >> local_file.log
As Khanna111 mentions, this will require a password to be entered (by default), which you can avoid by setting up SSH keys for passwordless login.
How about doing ssh to the client, running the dmesg command, and then rsyncing the logs back, assuming you can use rsync?
You could also have a cron job that periodically runs on the client, invokes dmesg, and dumps the log file, which can subsequently be copied over. This way you do not have to do an explicit ssh.
Another option, which I would prefer, is to get rsync to run the command "dmesg" before the transfer. The parameter to use is --rsync-path. The details are explained here: http://www.schwertly.com/2013/07/forcing-rsync-to-create-a-remote-path-using-rsync-path/
EDIT 1: I am assuming that, in the case of ssh, you have thought about passwordless logins and the setup they require.
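A rough sketch of that --rsync-path trick, with the host and paths as placeholders: the string passed to --rsync-path is executed by the remote shell, so a command can be chained in front of the real rsync binary.
# Run dmesg on the remote host just before the transfer, then pull the result back.
rsync -avz \
  --rsync-path='dmesg > /tmp/dmesg.log; rsync' \
  root@192.168.0.10:/tmp/dmesg.log ./logs/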
I am using the SSH Secure Shell client, a nice tool for connecting to the server.
However, I am wondering whether it is possible to log all of the messages coming from the program that I run via the SSH Secure Shell client. For example, I run ./test and my program prints debug lines as it runs. How can I log those debug lines to a txt file for analysis?
Have you tried?
./test > log.txt
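If the debug lines go to stderr rather than stdout, which is common, a variant that captures both streams and still shows them live might look like this (just a sketch):
./test 2>&1 | tee log.txt    # merge stderr into stdout, then show it and save it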
I have a bash script on a remote host that produces a large amount of data on fd=3 as well as some possibly interesting data on stdout and stderr. I want to:
Log stdout and stderr to a file on my local machine.
Write the data on fd=3 to stdout on my local machine.
Here's how it could be done if my big script were local:
exec 3> >(cat)
./big_script.sh -o /dev/fd/3 2>&1 >big_script.log
exec 3>&-
However, I want to run big_script.sh on a remote machine and have all three pipes (fd=1, fd=2, and fd=3) come out of the ssh program separately. What is the best way to do that?
nc (netcat) and tunnels? You can make a kind of log radio on your network this way!
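One way the netcat idea could look, as a rough sketch only: the port number and host names are arbitrary placeholders, it assumes the remote host can reach your machine, the process substitution needs bash on the remote side, and the -l syntax varies between netcat flavours.
# On the local machine: listen for the fd=3 stream and print it to stdout.
nc -l 9999

# In another local terminal: run the script over ssh; fd=3 flows back through
# netcat, while stdout/stderr travel over ssh into a local log file.
ssh remote 'exec 3> >(nc my_local_host 9999); ./big_script.sh -o /dev/fd/3' \
    > big_script.log 2>&1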
SSH opens just a single tty, so you get a single stream that contains all the data; you cannot tell apart what the other side sent to stdout and what it sent to stderr.
You could log to files on the remote host, and then run ssh remote tail -f for each of the log files from your local machine.
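A rough sketch of that approach, with the host name and all paths as placeholders:
# On the remote host, write each stream to its own file, then follow them
# separately from the local machine.
ssh remote './big_script.sh -o /tmp/fd3.out > /tmp/out.log 2> /tmp/err.log' &
ssh remote tail -f /tmp/out.log >> big_script.log &
ssh remote tail -f /tmp/err.log >> big_script.log &
ssh remote tail -f /tmp/fd3.out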