RRD print the timestamp of the last valid data - health-monitoring

I have an RRD database storing ping responses from a wide range of network equipment.
How can I print on the graph the timestamp of the last valid entry in the RRD database, so that if a host is down I can see when it went down?
I use the following to create the RRD file:
rrdtool create terminal_1.rrd -s 60 \
DS:ping:GAUGE:120:0:65535 \
RRA:AVERAGE:0.5:1:2880

Use the lastupdate option of rrdtool.
Another solution exists if you only have one file per host: don't update your RRD if the host is down. You can then see the last update time with a plain ls or stat, as in:
ls -l terminal_1.rrd
stat --format %Y terminal_1.rrd
If you plan to use the RRD caching daemon, you have to use the last command in order to flush the pending updates:
rrdtool last terminal_1.rrd
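To get that timestamp onto the graph itself, one option is a small wrapper around rrdtool graph. The sketch below is only an illustration (hypothetical graph options, GNU date assumed): it takes the epoch time reported by lastupdate and prints it in a COMMENT, escaping the literal colons as rrdtool graph requires:
# epoch of the last update: first field of the last line of lastupdate output
LAST=$(rrdtool lastupdate terminal_1.rrd | tail -1 | cut -d: -f1)

rrdtool graph ping.png --start -1d \
    DEF:ping=terminal_1.rrd:ping:AVERAGE \
    LINE1:ping#0000FF:ping \
    COMMENT:"last update\: $(date -d "@$LAST" +'%Y-%m-%d %H\:%M')"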

Related

Grep some information from log files distributed on different nodes

I need a grep command that automatically logs in to all the required nodes connected to the host using a for loop and displays the results on the host's screen, without saving any file on the host. All I need to change each time is the search string and the nodes to grep on.
This is what I tried:
for i in <Nodename>{1..5}; do
    echo $i
    ssh -q $i "cd <path>; grep '<string>' <filename>"
done
For example, if the node name is ca02p3zsynh001, then the log file name will be <filename>.log_ca02p3zsynh001_20171204_001316.gz. The last two fields are the date and the time (00 hrs 13 min 16 sec).
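Since the log files are gzipped, one way the loop above could be fleshed out is sketched below; the pattern, node list and path are hypothetical placeholders that would need adapting:
PATTERN='ERROR'                           # hypothetical search string
NODES="ca02p3zsynh001 ca02p3zsynh002"     # hypothetical node list

for node in $NODES; do
    echo "=== $node ==="
    # zgrep searches inside the .gz files directly; the output comes back
    # over ssh, so nothing is saved on the host
    ssh -q "$node" "zgrep '$PATTERN' /path/to/logs/*.log_${node}_*.gz"
done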

SCP to transfer files from remote server which are modified after specific time

In the remote server, log files are rotated as shown below when the size of the active log file (file.log) reaches 100 MB:
delete file.log.4
file.log.3 -> file.log.4
file.log.2 -> file.log.3
file.log.1 -> file.log.2
file.log -> file.log.1
Initially all the files will be moved to the local server and renamed as below:
file.log_timestamp_of_log4
file.log_timestamp_of_log3
file.log_timestamp_of_log2
file.log_timestamp_of_log1
After that, only those files which were modified after the last script run time should be moved to the local server.
For example, the next time the script runs, if file.log.1 and file.log.2 have modification times greater than the previous script run time, then only these should be moved to the local server.
Can this be done using scp ?
scp is a command to copy files from one server to another, so if you are copying from remote to local, yes, you can use scp. To fetch a file's last modified date you can use date -r, and you can save the last script run time to compare against. You need scp -p to preserve the modification date, and du to check the size.
So do something like the following:
# copy the file, preserving its modification time
scp -p remotehost:/remotepath/filename localpath/
# modification time of the copied file, in seconds since the epoch
last_mod=$(date -r localpath/filename +%s)
# $script_runtime: epoch time of the previous run, saved earlier by the script
if [ "$last_mod" -gt "$script_runtime" ]; then
    size=$(du -m localpath/filename | cut -f1)   # size in MB
    if [ "$size" -gt 100 ]; then
        mv localpath/filename localpath/filename1
    fi
fi
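To tie the pieces together, here is a rough sketch (hypothetical host and paths, GNU stat assumed on the remote side) that checks each rotated file's modification time over ssh before copying, keeps the previous run time in a local file, and renames the copies with the timestamp as described in the question; on the first run prev_run is 0, so everything is copied:
RUN_FILE=last_run
prev_run=$(cat "$RUN_FILE" 2>/dev/null || echo 0)   # epoch of the previous run, 0 on the first run
now=$(date +%s)
for f in file.log.1 file.log.2 file.log.3 file.log.4; do
    # modification time of the remote file, in seconds since the epoch
    mtime=$(ssh remotehost stat -c %Y "/var/log/$f")
    # copy only files modified after the previous run, preserving the mtime
    if [ "$mtime" -gt "$prev_run" ]; then
        scp -p "remotehost:/var/log/$f" "/local/path/file.log_$mtime"
    fi
done
echo "$now" > "$RUN_FILE"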

Linux - copying only new files from one server to another

I have a server where files are transferred through FTP to a location. All files since the transfers began (January 2015) are there.
I want to set up a new server and transfer the files from the first server's location.
Basically, I need a cron job to run scp and transfer only the new files since the last run.
The ssh connection between the servers is working and I can transfer files between them without restriction.
How can I achieve this in Ubuntu?
The suggested duplicate doesn't apply because, on my destination server, I will have just one file keeping the date of the last cron run, and the files copied from the first server will be parsed and deleted afterwards.
rsync will simply make sure that all files exist on both servers, correct?
I managed to set up the cron job on the remote computer using the following:
First I created a timestamp file which keeps the time of the last cron job run:
touch timestamp
Then I copy all new files with ssh and scp:
ssh username@remote find <files_path> -type f -newer timestamp | xargs -i scp username@remote:'{}' <local_path>
Then I touch the timestamp file with a new modification time:
touch -m timestamp
The only problem with this script is that, if a file is copied to the remote host while the ssh command is running, before the timestamp is touched the second time, that file is ignored on later runs.
Later edit:
To make sure that there is no gap between the timestamp file and the actual run caused by the duration of the ssh command, the script was changed to:
touch timestamp_new
ssh username@remote find <files_path> -type f -newer timestamp | xargs -i scp username@remote:'{}' <local_path>
rm -rf timestamp
mv timestamp_new timestamp
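As a variant of the above, the timestamp file can live entirely on the local side, with the cutoff time handed to the remote find via -newermt (GNU find and GNU date assumed; host and paths below are hypothetical). This is only a sketch, and it assumes the timestamp file already exists from a previous run:
#!/bin/sh
REMOTE=username@remote          # hypothetical
REMOTE_PATH=/data/incoming      # hypothetical
LOCAL_PATH=/data/local          # hypothetical

# record the start of this run before copying, so files that arrive
# while the copy is in progress are picked up on the next run
touch timestamp_new

# cutoff = modification time of the previous run's timestamp file
CUTOFF=$(date -r timestamp '+%Y-%m-%d %H:%M:%S')

ssh "$REMOTE" "find $REMOTE_PATH -type f -newermt '$CUTOFF'" \
    | xargs -I{} scp "$REMOTE":'{}' "$LOCAL_PATH"

# promote the new timestamp for the next run
mv timestamp_new timestamp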

RSnapshot reports errors using rsnapreport.pl: "NO STATS DATA"

I configured RSnapshot on a WD My Book Live (2TB) and it's working (at least that's what the logs say). I used the reporting tool rsnapreport.pl from /usr/share/doc/rsnapshot/examples/utils/rsnapreport.pl.gz to get human-readable mail reports about the crontab-triggered backup jobs.
While the backup jobs seem to work, the reports are obviously missing information, as you can see in this snippet:
SOURCE TOTAL FILES FILES TRANS TOTAL MB MB TRANS LIST GEN TIME FILE XFER TIME
--------------------------------------------------------------------------------------------------------------------
rsync://server:/vmail 13950 137 3687.81 20.31 0.052 seconds 0.000 seconds
ERRORS
/shares/rsnapshot/daily.0/ NO STATS DATA
The question now is:
Besides the error at the bottom, which is my first and major problem, the FILE XFER TIME is also 0 for all backup jobs (I guess the issues are correlated).
I followed all the instructions (see below) - what am I missing?
So, what have I done so far:
*) The NAS runs Debian Squeeze (incl. squeeze-backports), Kernel Version is 2.6.32, PPC Architecture.
*) rsync version 3.0.3-2 (preinstalled), with /etc/rsyncd.conf:
pid file=/var/run/rsyncd.pid
lock file=/var/run/rsync.lock
log file=/var/log/rsync.log
[rsync]
path=/shares/rsync
uid=root
gid=share
read only=no
list=yes
auth users=root
*) Installed rsnapshot 1.3.1-1 with /etc/rsnapshot.conf:
config_version 1.2
snapshot_root /shares/rsnapshot/
cmd_rm /bin/rm
cmd_rsync /usr/bin/rsync
cmd_logger /usr/bin/logger
interval daily 7
interval weekly 4
interval monthly 3
verbose 3
loglevel 3
logfile /var/log/rsnapshot.log
lockfile /var/run/rsnapshot.pid
rsync_long_args --delete --numeric-ids --relative --delete-excluded --stats
backup rsync://server:/vmail/ backupOfServer/vmail/
backup ...
backup ...
backup ...
*) unpacked the report script and followed instructions in the script (most of which you can see in the config above):
# this script prints a pretty report from rsnapshot output
# in the rsnapshot.conf you must set
# verbose >= 3
# and add --stats to rsync_long_args
# then setup crontab 'rsnapshot daily 2>&1 | rsnapreport.pl | mail -s"SUBJECT" backupadm@adm.com
# don't forget the 2>&1 or your errors will be lost to stderr
*) and set up cron.d/rsnapshot:
MAILTO="user1@foo,user2@foo"
30 3 * * * root /usr/bin/rsnapshot daily 2>&1 | /root/rsnapreport.pl
0 3 * * 1 root /usr/bin/rsnapshot weekly 2>&1 | /root/rsnapreport.pl
30 2 1 * * root /usr/bin/rsnapshot monthly 2>&1 | /root/rsnapreport.pl
If you need any further or more detailed information, don't hesitate to ask. We are happy to have daily reports of the backups at all; it's just the errors at the bottom that are making us nervous.
Best Regards and thanks in advance,
Peter
The reason for this error was that I did not uncomment the cmd_cp parameter. RSnapshot therefore used its built-in copy mechanism, which uses rsync.
This call to rsync was echoed to the output. The report script scans the output for calls to rsync and looks for transfer statistics, but the initial "copy" command does not produce such stats - hence the error saying "NO STATS" for the source /daily.0.
The solution is to read the configuration file and follow the instructions:
# LINUX USERS: Be sure to uncomment "cmd_cp". This gives you extra features.
# EVERYONE ELSE: Leave "cmd_cp" commented out for compatibility.
#
# See the README file or the man page for more details.
#
#cmd_cp /bin/cp
Uncommenting the last line fixes the error... RTFM ;)
The "NO STATS DATA" error is also reported if you miss out on the:
rsync_long_args --stats
The "NO STATS DATA" error is also reported if you backing up something containing "rsync" in its path like /etc/default/rsync.
For example in this case the command rsnapshot daily 2>&1 | /bu/script/rsnapreport.pl | mail -s "[BU Report]date" me#example.com will return the following errors:
Use of uninitialized value $source in hash element at
/bu/script/rsnapreport.pl line 95, <> line 3991. Use of uninitialized
value $source in hash element at /bu/script/rsnapreport.pl line 96, <>
line 3991. ...
This is due to the rsnapreport.pl script, which parses stats info from the rsync output by looking for the "rsync" string in it.
To resolve this problem, simply add to your /etc/rsnapshot.conf a line excluding the problematic path found in the rsync output.
For example, if you don't need to back up etc/default/rsync:
exclude etc/default/rsync
If you need to back up files whose paths contain "rsync", you have to modify the rsnapreport.pl script.

Using Postgres transactions in linux shell script

I'm developing a shell script that loops through a series of Postgres database table names and dumps the table data. For example:
# dump data
psql -h $SRC_IP_ADDRESS -p 5432 -U postgres -c "BEGIN;" AWARE
for i in $TABLES; do    # $TABLES: the list of table names (loop header missing from the excerpt)
    pg_dump -U postgres -h $IP_ADDRESS -p 5432 -t $i -a --inserts MYDB >> out.sql
done
psql -h $IP_ADDRESS -p 5432 -U postgres -c "COMMIT;" MYDB
I'm worried about concurrent access to the database, however. Since there is no database-wide lock for Postgres, I tried to wrap a BEGIN and COMMIT around the loop (using psql, as shown above). This resulted in an error message from the psql command, saying:
WARNING: there is no transaction in progress
Is there any way to achieve this? If not, what are the alternatives?
Thanks!
Your script has two main problems.
The first problem is practical: a transaction is part of a specific session, so your first psql command, which just starts a transaction and then exits, has no real effect: the transaction ends when the command completes, and later commands do not share it.
The second problem is conceptual: changes made in transaction X aren't seen by transaction Y until transaction X is committed, but as soon as transaction X is committed, they're immediately seen by transaction Y, even if transaction Y is still in progress (at the default READ COMMITTED isolation level). This means that, even if your script did successfully wrap the entire dump in a single transaction, it wouldn't make any difference, because your dump could still see inconsistent results from one query to the next. (That is: at that isolation level it's meaningless to wrap a series of SELECTs in a transaction; a transaction is only meaningful if it contains one or more DML statements, UPDATEs or INSERTs or DELETEs.)
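To illustrate the session point: if several statements really did need to share one transaction, they would all have to go through a single psql invocation, for example via a here-document (the table name here is purely hypothetical):
psql -h $IP_ADDRESS -p 5432 -U postgres MYDB <<'SQL'
BEGIN;
-- everything between BEGIN and COMMIT runs in the same session,
-- and therefore in the same transaction
SELECT count(*) FROM some_table;
COMMIT;
SQL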
However, you don't really need your shell script to loop over your list of tables; you can just give pg_dump all the table names at once, by passing multiple -t flags:
pg_dump -U postgres -h $IP_ADDRESS -p 5432 \
-t table1 -t table2 -t table3 -a --inserts MYDB >> out.sql
and according to the documentation, pg_dump "makes consistent backups even if the database is being used concurrently", so you wouldn't need to worry about setting up a transaction even if that did help.
(By the way, the -t flag also supports a glob notation; for example, -t table* would match all tables whose names begin with table.)
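For instance, just as an illustration (quote the pattern so the shell does not expand it before pg_dump sees it):
pg_dump -U postgres -h $IP_ADDRESS -p 5432 \
    -t 'table*' -a --inserts MYDB > out.sql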
