Log to a specific file using syslog functions - Linux

Although this topic has been discussed by others, I could not get it working by reading the existing explanations here.
I would like to use the syslog functions to log to a specific file. I can see the logged messages, but I cannot get them written into a specific file.
Here is what I did:
#define log_info(...) syslog(LOG_INFO, __VA_ARGS__)
First approach:
openlog("PingWatchdog", LOG_PID|LOG_CONS, LOG_USER);
log_info("[INFO]: PingWatchdog: pingDispatcher thread starting.");
closelog();
In /etc/rsyslog.d there is a config file to which I added this rule:
if:syslogtag, isequal, "PingWatchdog:" /var/log/pingwatchdog.log
&stop
Second approach:
openlog("PingWatchdog", 0, LOG_LOCAL1);
log_info("[INFO]: PingWatchdog: pingDispatcher thread starting.");
closelog();
In /etc/rsyslog.d there is a config file to which I added this rule:
local1.info /var/log/pingwatchdog.log
But neither of these two methods writes to my desired file, /var/log/pingwatchdog.log.
My program name is PingWatchdog.
I also tried this rule, without success:
if $programname == 'PingWatchdog' then /var/log/pingwatchdog.log
Any idea what I should do?

Add the following to your rsyslog conf:
if ($syslogtag contains 'PingWatchdog') then {
    *.* /var/log/pingwatchdog.log
    stop
}
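One likely reason the isequal rule never matched: with LOG_PID, the tag rsyslog sees is PingWatchdog[<pid>]:, not PingWatchdog:, so an exact comparison fails while contains (or startswith) matches. As a minimal sketch of the whole chain (the message text is illustrative), the C side:

#include <syslog.h>

int main(void) {
    /* Tag "PingWatchdog"; LOG_PID appends the process id to the tag. */
    openlog("PingWatchdog", LOG_PID | LOG_CONS, LOG_USER);
    syslog(LOG_INFO, "pingDispatcher thread starting.");
    closelog();
    return 0;
}

and a matching rule in a file under /etc/rsyslog.d/ (restart rsyslog afterwards, e.g. systemctl restart rsyslog):

:syslogtag, startswith, "PingWatchdog" /var/log/pingwatchdog.log
&stop

The $programname rule from the question is normally valid too, provided rsyslog has been restarted and the rule runs before anything that discards the message.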

Related

QFileSystemWatcher fileChanged signal emitted only once for several file updates

I am using QFileSystemWatcher to monitor log file changes.
For creating and updating the log file I am using the Boost library.
When I log several messages in one method, the fileChanged signal is emitted only once (for the last message), even though I can see the file being updated after each log message is added.
So, the code for the QFileSystemWatcher is:
std::string fn = "app.log";
logging::init_log(fn); // Boost.Log initialization (project helper)
QFileSystemWatcher* watcher = new QFileSystemWatcher();
// connect() returns a handle that evaluates to false on failure
auto success = QObject::connect(watcher, SIGNAL(fileChanged(QString)),
                                this, SLOT(handleFileChanged(QString)));
Q_ASSERT(success);
watcher->addPath(QString::fromStdString(fn));
Adding log messages:
void a() {
    /* some code */
    logging::write_log("test error", logging::kError);
    logging::write_log("test info", logging::kInfo);
}
QFileSystemWatcher emits the signal only once, for the Info-level message.
In the file manager I can see the file being updated after each call (test error, test info).
In the log file initialization I use
sink->locked_backend()->auto_flush(true);
so the file is updated immediately.
How can I fix this? Or is there another approach to watching log file updates so I can show the messages in a GUI?
Similar filesystem event notifications are usually collapsed into one unless they are consumed by a reader. For example, if the writer writes 10 bytes to a file, the thread that monitors that file for writes will typically see one event instead of 10. This is explicitly outlined in the notes of the inotify description on Linux, which is likely what QFileSystemWatcher uses internally.
This should not matter for any correct implementation of filesystem monitoring software. The notification only allows the monitor to notice that some event happened (e.g. a write occurred); it is up to the software to discover further details about the event (e.g. the amount of data that was written, and the writing position).
If you aim to just display the written logs, you should be able to simply read the file contents from the current reading position to the end of the file. That read may return one log record or more. It can also return an incomplete log record, if the standard library implementation flushes part of a record (e.g. when auto_flush is disabled and the stream buffer issues a write with only part of the record content). The monitoring software should therefore split the data it reads into separate log records itself and detect incomplete ones (e.g. by splitting on newline characters).
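A minimal sketch of that approach (the class and member names are illustrative, not from the original code): on each fileChanged signal, read from the last known offset to the end of the file, and pass only complete, newline-terminated records to the GUI.

#include <QObject>
#include <QFile>
#include <QFileSystemWatcher>
#include <QByteArray>
#include <QString>

class LogTail : public QObject {
    Q_OBJECT
public:
    explicit LogTail(const QString &path, QObject *parent = nullptr)
        : QObject(parent), m_path(path) {
        m_watcher.addPath(path);
        connect(&m_watcher, &QFileSystemWatcher::fileChanged,
                this, &LogTail::handleFileChanged);
    }

private slots:
    void handleFileChanged() {
        QFile f(m_path);
        if (!f.open(QIODevice::ReadOnly | QIODevice::Text))
            return;
        f.seek(m_pos);            // continue where the last read stopped
        m_buffer += f.readAll();  // may contain zero, one, or many records
        m_pos = f.pos();
        // Complete records end with '\n'; keep a trailing partial record
        // in the buffer until the rest of it arrives.
        int idx;
        while ((idx = m_buffer.indexOf('\n')) != -1) {
            QByteArray record = m_buffer.left(idx);
            m_buffer.remove(0, idx + 1);
            // hand "record" to the GUI here
        }
    }

private:
    QString m_path;
    QFileSystemWatcher m_watcher;
    qint64 m_pos = 0;
    QByteArray m_buffer;
};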

Jenkins: extended choice parameter - Groovy - how to create a file on the master

I have a JSON string defined in the Groovy script part of the 'extended choice parameter' plugin. Additionally, I want to write the JSON config to a file on the master side, from inside the Groovy script area. I thought maybe the job directory would be the best place?
http://hudson/hudson/job/MY_JOB/config.json
If you ask now why I would do this: the reason is that I don't want the config pre-saved somewhere else. I don't like the idea of configuring the file outside of the job config. I want to see/adjust all configs in one place - the job config.
I need a lot of other information from the JSON config for later use in a Python code section within the same job.
My questions are:
Am I on the wrong path here? Any suggestions?
Can I write the JSON config directly on the master side? It doesn't have to be the Jenkins job directory; I don't care about the device/directory.
If the approach is acceptable, how can I do this?
The following code doesn't work:
def filename = "config.json"
// resolves against the current working directory of the Jenkins process
def targetFile = new File(filename)
if (targetFile.createNewFile()) {
    println "Successfully created file $targetFile"
} else {
    println "Failed to create file $targetFile"
}
Remark:
hudson.FilePath looks interesting!
http://javadoc.jenkins-ci.org/hudson/FilePath.html
Thanks for your help, Simon
I got it:
import groovy.json.*

// location on the master: /srv/raid1/hudson/jobs
jsonConfigFile = new File("/srv/raid1/hudson/jobs/MY_JOB/config.json")
jsonConfigFileOnMaster = new hudson.FilePath(jsonConfigFile)
if (jsonConfigFileOnMaster.exists()) {
    jsonConfigFileOnMaster.delete()
}
jsonConfigFileOnMaster.touch(System.currentTimeMillis()) // last-modified timestamp in milliseconds
jsonFormatted = JsonOutput.toJson(localJsonString) // localJsonString holds the JSON config
jsonConfigFile.write jsonFormatted
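A slightly shorter variant of the same idea (a sketch, assuming the same hard-coded master path; FilePath.write(content, encoding) is part of the Jenkins API):

import groovy.json.*
import hudson.FilePath

def jsonFormatted = JsonOutput.toJson(localJsonString) // same localJsonString as above
def configOnMaster = new FilePath(new File("/srv/raid1/hudson/jobs/MY_JOB/config.json"))
configOnMaster.write(jsonFormatted, "UTF-8") // creates or overwrites the file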

Error: ENOENT with Bunyan rotating-file logging (NodeJS)

I am using the Bunyan module for NodeJS logging. When I try using the rotating-file type, it makes my app crash every time and outputs this error:
Error: ENOENT, rename 'logs/info.log.3'
However, it never happens at the same point in time, so I can't see any pattern...
This is how I instantiate my logger:
var log = Bunyan.createLogger(config.log.config);
log.info('App started, ' + process.env.NODE_ENV);
And here is my config.json (the low rotation period is for testing purposes):
{
    "name": "app",
    "streams": [
        {
            "type": "rotating-file",
            "period": "5000ms",
            "count": 12,
            "level": "info",
            "path": "logs/info.log"
        },
        {
            "type": "rotating-file",
            "period": "5000ms",
            "count": 12,
            "level": "error",
            "path": "logs/error.log"
        },
        {
            "type": "rotating-file",
            "period": "5000ms",
            "count": 12,
            "level": "trace",
            "path": "logs/trace.log"
        }
    ]
}
Can anyone advise how to fix my issue? Thanks in advance.
What I have just done (last night actually) to get around this problem of a master + workers contending over a Bunyan rotating-file is to have the workers write "raw" log records to a stream-like object I created called a WorkerStream. The write method of the WorkerStream simply calls process.send to use IPC to deliver the log record to the master. The master uses a different logger config that points to a rotating-file. The master uses the code shown below to listen for log records from its workers and write them to the log file. So far it appears to be working perfectly.
cluster.on('online', function (worker) {
    // New worker has come online.
    worker.on('message', function (msg) {
        // Watch for log records from this worker and write them
        // to the real rotating log file.
        if (msg.level) {
            log._emit(msg);
        }
    });
});
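A minimal sketch of the WorkerStream idea described above (the class name and configuration are illustrative): workers create a "raw" stream whose write() forwards each record to the master over IPC.

// In each worker process:
function WorkerStream() {}
WorkerStream.prototype.write = function (rec) {
    process.send(rec); // rec is a raw log record object, not a string
};

var bunyan = require('bunyan');
var log = bunyan.createLogger({
    name: 'app',
    streams: [{
        type: 'raw', // deliver records to the stream as objects
        stream: new WorkerStream(),
        level: 'info'
    }]
});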
ln is your friend.
Existing logging libraries have rotation problems with the cluster module. Why doesn't ln have this issue?
Both bunyan and log4js rename the log file on rotation. Disaster strikes when the file is renamed in a cluster environment, because the file gets renamed twice.
bunyan suggests using the process id as part of the filename to tackle this issue. However, this generates too many files.
log4js provides a multiprocess appender and lets the master log everything. However, this can become a bottleneck.
To solve this, ln just uses fs.createWriteStream(name, {"flags": "a"}) to create a formatted log file at the beginning instead of calling fs.rename at the end. I tested this approach with millisecond rotation in a cluster environment and no disasters occurred.
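A sketch of that naming-based rotation (the file name pattern is illustrative): each period opens its own file by name with the append flag, so no process ever needs to rename anything.

var fs = require('fs');

// Open the current period's log file by its formatted name. The "a"
// flag makes concurrent writers append rather than clobber each other.
function openLogStream() {
    var name = 'logs/info.' + new Date().toISOString().slice(0, 10) + '.log';
    return fs.createWriteStream(name, { flags: 'a' });
}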
I have experienced the same issue without using clustering. I believe the problem is caused by old files sitting in the log directory. While the main logger can open and append to existing files, the file rotation logic uses rename, which fails when it steps on an existing file (e.g. an existing info.log.3).
I'm still digging into the source to figure out what needs to change to recover from the rolling error.
One additional thought as I review the source: if you have multiple Bunyan log instances that use the same log file (in my case, a common error.log), the rename calls could happen nearly concurrently at the OS level (asynchronous and separate calls from a Node.js perspective, but concurrent from the OS perspective).
It's sadly not possible to use multiple rotating file streams against the same file.
If you're in the same process, you must use a single logger object - make sure you're not creating several of them.
If you're working across processes, you must log to different files. Unfortunately there's nothing yet that has the IPC in place to let different rotators coordinate amongst themselves.
I have a plugin rotating file stream that detects when you try to create two rotators against the same file in a single process and throws an error.
It can't help in the case of multiple processes, though.
bunyan-rotating-file-stream
From my experience, this sometimes happens when the logs directory (or whatever you named it) does not exist.
If you are running into this error in an automation pipeline, for example, you may be git-ignoring all the files in logs and committing the directory empty, so it is not created when the pipeline clones the repository.
Simply make sure that logs is created, for instance by placing a .gitkeep file inside it (or any other trick).
This may be the case for many of you who come across this question.
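Alternatively, a small guard in the application itself (a sketch; mkdirSync's recursive option requires Node 10+):

var fs = require('fs');

// Make sure the log directory exists before instantiating the logger.
if (!fs.existsSync('logs')) {
    fs.mkdirSync('logs', { recursive: true });
}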

Have Puppet ensure a service is running only when not in maintenance mode

I have a basic service resource in a Puppet manifest that I want applied most of the time, with just:
service { $service_name :
  ensure => "running",
  enable => "true",
}
The thing is, there are maintenance periods during which I would like to ensure Puppet doesn't come along and try to start it back up.
I was thinking of creating a file, no_service_start, in a specified path and doing a check like the 'creates' guard you can use with exec, but it doesn't look like anything of the sort is available for the service type.
My next thought was to have the actual service init script check for this file itself and just die early if the guard file exists.
While this works, in that it prevents the service from starting, it manifests itself as a big red error in Puppet (as expected). Given that the service not starting is the desired outcome when that file is in place, I'd rather not have an error message present and have to spend time wondering whether it's "legit" or not.
Is there a more "Puppet" way to implement this?
Define a fact for when maintenance is happening.
Then put the service definition in an if block based on that fact:
# external facts arrive as strings, hence the comparison with 'true'
if $facts['maintenance'] != 'true' {
  service { $service_name :
    ensure => "running",
    enable => "true",
  }
}
Then, when Puppet compiles the catalog, if maintenance is 'true' the service will not be managed and will stay in whatever state it currently happens to be.
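One way to supply such a fact (a sketch; the fact name and the guard file path /etc/no_service_start are illustrative) is an executable external fact script, e.g. under /etc/facter/facts.d/ (the directory varies with your Facter version):

#!/bin/sh
# /etc/facter/facts.d/maintenance.sh
# External facts print key=value pairs; Facter exposes them as strings.
if [ -e /etc/no_service_start ]; then
    echo "maintenance=true"
else
    echo "maintenance=false"
fi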
I don't really like this answer, but a way to work around Puppet spitting out errors when bailing because of a guard file is to have the init script that does the check exit with status 0.
How about putting the check outside? You could do something similar to this: https://stackoverflow.com/a/20552751/1097483, except with your service check inside the if block.
As xiankai said, you can do this on the puppetmaster. If you have a script that returns running or stopped as a string, depending on the current time or anything else, you can write something like:
service { $service_name :
  ensure => generate('/usr/local/bin/maintenanceScript.sh'),
}
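A sketch of such a script (the guard file path is illustrative; note that generate() runs on the puppetmaster, so the script and the guard file must live there, and printf avoids a trailing newline in the returned value):

#!/bin/sh
# /usr/local/bin/maintenanceScript.sh - runs on the puppetmaster
if [ -e /etc/puppet/maintenance ]; then
    printf 'stopped'
else
    printf 'running'
fi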

How do I write log messages in Kohana 3.2?

OK, I've tried searching all over but can't seem to get a simple, straightforward answer.
I want to write log messages (INFO, ERROR, etc.) to the Kohana log file /application/logs/YYYY/MM/DD.php.
How do I do it?
Try the Log class add() method: http://kohanaframework.org/3.2/guide/api/Log#add
Call it like this:
Log::instance()->add(Log::NOTICE, 'My Logged Message Here');
For the first parameter (the level), use one of the 9 constants defined in the Log class.
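For example (a sketch; the messages are illustrative):

// Anywhere in application code:
Log::instance()->add(Log::INFO, 'App started');
Log::instance()->add(Log::ERROR, 'Could not connect to the database');

// Kohana buffers log entries and writes them at shutdown by default;
// call write() to flush immediately:
Log::instance()->write();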
Shuadoc, you shouldn't touch system files (everything under the system folder).
Change the value in bootstrap.php instead, as stated by Ygam.
Otherwise you'll be in trouble when updates come.
