Linux log file wildcard in SCOM

I want to monitor a log file on a Linux machine with SCOM.
The log is from PostgreSQL. The path is /var/lib/pgsql/9.4/data/pg_log/postgresql-2017-08-21.log.
The thing is, if I use that exact path it works, but I can't seem to make it work with wildcards for the dates.
I already tried /var/lib/pgsql/9.4/data/pg_log/postgresql-%Y-%m-%d.log but no luck.
Any thoughts?

SCOM does not provide the same log file monitoring capabilities on Linux as on Windows. On Linux it can only monitor a single file per monitoring template; there is no support for file name patterns. I can recommend one of the following ways to work around this limitation (in order of increasing complexity):
Make the application log to a single file (by disabling log file rotation, or by using a script that appends the rotating logs into a separate file for SCOM monitoring; see the sketch after this list)
Transfer the log files to a Windows server which has SCOM Agent installed and monitor the files from there. Don't forget that the files need to be converted from UNIX line endings (\n) to DOS/Windows line endings (\r\n).
Develop some script-based (e.g. Python) solution that follows the log file rotation. While this can cover future requirements as well (e.g. alerting on more patterns in the logs), it adds a lot of complexity to the system
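For the first workaround, here is a rough shell sketch that keeps appending the current day's PostgreSQL log to one fixed file that SCOM can monitor. The target path /var/log/postgresql-scom.log is hypothetical; adjust it and the source pattern to your installation:

#!/bin/sh
# Append today's PostgreSQL log to one stable file for SCOM, switching
# to the new log file when the date rolls over.
DST="/var/log/postgresql-scom.log"   # hypothetical target file
while true
do
    TODAY=$(date +%Y-%m-%d)
    SRC="/var/lib/pgsql/9.4/data/pg_log/postgresql-$TODAY.log"
    # -F keeps retrying until the file exists and follows it afterwards
    tail -n +1 -F "$SRC" >> "$DST" &
    TAILPID=$!
    # Wait for midnight, then kill the tail and start on the next day's file
    while [ "$(date +%Y-%m-%d)" = "$TODAY" ]
    do
        sleep 60
    done
    kill "$TAILPID"
done

Run it from an init script or a systemd unit so it survives reboots.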

You can try:
/var/lib/pgsql/9.4/data/pg_log/postgresql-$(date +"%Y-%m-%d").log
Note that $(date ...) is shell command substitution, so this form only helps where the path is expanded by a shell (for example, inside a wrapper script), not when it is entered literally into a SCOM template.

Related

Where are apps expected to place log files on Linux?

I am creating a cross-platform app and would like to write some information to a log file. I know that I can dump it on the desktop or any number of unexpected places, but I am interested in placing it in the place where each operating system recommends. To complicate matters, systemd may be present and change expectations based on the platform.
Where is the expected place for Linux? And are there any other expectations I should be aware of (like if I need to put it in a folder with my company or app name)?

Continuously monitor a directory in Linux and notify when a new file is available

I'm a beginner with Linux and scripting environments. My requirement is like this:
1. From an ASP.NET application, a file will be generated and copied to a predefined folder on a Linux server machine. (I'm assuming this can be done by remote file sharing using a Samba server.)
2. A service or script or whatever should be there on the Linux machine to track continuously whether a file is available.
3. Once a new file is available, just parse the file, extract some input variables, and execute a shell script based on these parameters.
My question lies in point 2: how can I write a service or script that executes continuously and monitors whether a file has arrived in a particular folder?
I've searched a lot and followed many links, but I'm confused about the easiest method to do this. I don't want to spend a lot of effort on coding here, as the shell script to be executed and the ASP.NET app are the more important parts; this should just be a connector in between.
You are looking for something like inotify.
[cnicutar@ariel ~]$ inotifywait -m -e create ~/somedir/
Setting up watches.
Watches established.
/home/cnicutar/somedir/ CREATE somefile
For example, you could do it in a loop:
inotifywait -m -e create ~/somedir/ | while read -r line
do
    echo "$line"   # each event line looks like: <dir> CREATE <file>
done
inotify is the perfect solution to your problem. It is available as a system command that can be used in a shell, or as a system call that can be used in a C/C++ program. For details see the accepted answer to this question: Inotify - how to use it?
Update: you need inotify-tools to use it on the command line; the answer linked above only describes the C/C++ system calls. inotify-tools is also available as a packaged distribution, so search your favorite install repository (yum/apt-get/rpm etc.): https://github.com/rvoicilas/inotify-tools/wiki
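To tie this back to step 3 of the question, here is a minimal sketch that parses each new file and hands the extracted values to a downstream script. The watch directory /srv/incoming and the handler process.sh are hypothetical names; the close_write event is used instead of create so the handler fires only after the copy (e.g. over the Samba share) has finished:

#!/bin/sh
# Watch a drop directory and run a handler for every finished upload.
WATCH_DIR="/srv/incoming"   # hypothetical drop folder
inotifywait -m -e close_write --format '%f' "$WATCH_DIR" | while read -r name
do
    file="$WATCH_DIR/$name"
    # Example parse (an assumption): the first line of the file holds the parameters
    params=$(head -n 1 "$file")
    /usr/local/bin/process.sh "$params"   # hypothetical downstream script
done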

Printer config file in *nix

Are the settings or configuration specifics of a printer on a *nix system using CUPS stored in a file? My assumption is yes, as *nix systems seem to use files for everything as opposed to using a registry system as does Windows. If so, where are such files located? Are they capable of having their file permissions modified, and if so, what could cause such a thing to occur in a non-manual way?
This question relates to one of my other questions, where it helps explore a single theory toward an answer, but it is decidedly separate.
Check /etc/cups; for printers, the file is printers.conf.
Its permissions can be modified, since the file usually belongs to the lp group rather than a single user. Check cron jobs, system updates, and any other CUPS interface that your distribution provides.
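For example, a quick way to check the file and its ownership, assuming a standard CUPS layout:

# Inspect the printer configuration and its permissions/ownership
ls -l /etc/cups/printers.conf
# Look for scheduled jobs that might touch CUPS files non-manually
ls /etc/cron.d /etc/cron.daily | grep -i cups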

syslog: does it remove old logs when storage space runs low?

I am using syslog on an embedded Linux device (Debian ARM) that has relatively small storage (~100 MB). If we assume the system will be up for 30 years and logs all possible activities, could syslog fill up the storage? If so, is syslog intelligent enough to remove old logs when space on the storage medium runs low?
It completely depends on how much stuff gets logged, but if you only have ~100 MB, it's certainly likely that your storage will fill up well before 30 years!
You didn't say which syslog server you're using. If you're on an embedded device you might be using the BusyBox syslogd, or you may be using the regular syslogd, or you may be using rsyslog. But in general, no syslog server rotates log files all by itself. They all depend on external scripts run from cron to do it. So you should make sure you have such scripts installed.
In non-embedded systems the log rotation functionality is often provided by a software package called logrotate, which is quite elaborate and has configuration files saying how and when which log files should be rotated. In embedded systems there is no standard at all. The most common configuration (especially when using BusyBox) is that logs are not written to disk at all, only to a memory ring buffer. The next most common is idiosyncratic ad-hoc scripts built and installed by the embedded system integrator. So you have to scan the crontabs and see whether anything configured to be invoked looks like a log rotator.
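For illustration, a minimal logrotate configuration sketch, assuming the log in question is /var/log/messages (names and limits are placeholders to adjust for your device):

/var/log/messages {
    size 1M        # rotate once the file exceeds 1 MB
    rotate 4       # keep at most four old copies; delete the oldest beyond that
    compress       # gzip rotated copies to save space
    missingok      # don't complain if the log is absent
    notifempty     # skip rotation when nothing new was logged
}

With cron invoking logrotate regularly, this one log's worst-case footprint stays bounded at roughly the current file plus four compressed archives.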

Spring Integration - Reading files from multiple locations & putting them at a central repository

I need to transfer the file contents from multiple servers to a central repository as soon as there is a change in the file. Also the requirement is that only the changed contents should be transferred, not the whole file.
Could someone let me know if this is possible using the Spring Integration File Inbound/Outbound Adapters?
The file adapters only work on local files (but they can work if you can mount the remote filesystems).
The current file adapters do not support transferring parts of files, but we are working on File Tailing Adapters, which should be in the code base soon. These will only work for text files, though (and only if you can mount the remote file system). For Windows (and other platforms that don't have a tail command), there's an Apache Commons Tailer implementation but, again, it will only work for text files, and only if you can mount the shares.
If you can't mount the remote files, or they are binary, there's no out-of-the-box solution, but if you come up with a custom solution to transfer the data (e.g. google "tailing remote files"), it's easy to then hook it into a Spring Integration flow to write the output.
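On the Linux side, the tailing idea itself is easy to sketch in shell; this assumes a hypothetical log at /var/log/app.log and a hypothetical transport script that ships each new line to the central repository:

# Start at the end of the file (-n 0) and follow it across rotations (-F)
tail -n 0 -F /var/log/app.log | while read -r line
do
    ./ship-to-repository.sh "$line"   # hypothetical transport (scp, HTTP POST, ...)
done

This forwards only the newly appended content, which is the same property the tailing adapters aim to provide.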
