Making my module's printk output go to my own logfile - Linux

I'm doing some Linux module programming. I typically printk little error messages and such for debugging; I then exit out of my module and use "dmesg" to see what's up.
That method of debugging is no longer sufficient. I would like to pipe my "printk" text into my own logfile - preferably local, but I understand if that's impossible and I need to put it somewhere like /var/log/*.log.
I've looked into editing syslog.conf, but I'm not sure what to do there. I want just my module's printk output in its own file. Is there a simple way to do this that my Google-fu cannot catch?

You'll need to start every printk with a unique token for your module:
printk(KERN_INFO "MyModule: ....", ....);
Then use a syslog-ng (for example) match rule to catch all the module's output.
Use `tail -f /var/log/messages | grep MyModule` to see the live kernel output.
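As a concrete sketch of the syslog-ng side, something like the following could work. This is an assumption-laden example, not a drop-in config: the source name (s_src here) and the log path vary by distribution, and the match() filter relies on every message carrying the "MyModule:" prefix described above.

filter f_mymodule { match("MyModule:" value("MESSAGE")); };
destination d_mymodule { file("/var/log/mymodule.log"); };
log { source(s_src); filter(f_mymodule); destination(d_mymodule); };

A convenient way to get that prefix onto every line without repeating it is to put #define pr_fmt(fmt) "MyModule: " fmt at the top of the module source (before the includes) and then use pr_info()/pr_err() instead of raw printk.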


Where are standard output and standard error being redirected by the mongodb-mms-automation agent?

Sorry for my noob question, as I am very new to Linux. Please consider the below Linux command:
/opt/mongodb-mms-automation/bin/mongodb-mms-automation-agent
-f /etc/mongodb-mms/automation-agent.config
-pidfilepath /var/run/mongodb-mms-automation/mongodb-mms-automation-agent.pid
>> /var/log/mongodb-mms-automation/automation-agent-fatal.log 2>&1
According to my understanding, >> redirects standard output to a file (appending), and 2>&1 means that standard error will be redirected to the same location as standard output. So in the above case I expect both standard output and standard error to be redirected to /var/log/mongodb-mms-automation/automation-agent-fatal.log.
But obviously this is not the case. I can see that all info/error messages are being written to /var/log/mongodb-mms-automation/automation-agent.log instead. Can someone please explain what error I am making in reading this command?
Regards,
Meena
Standard output and standard error are just default destinations; the program could be doing a number of things which will sabotage any attempts to save the logs by redirecting to a file:
It writes straight to the terminal output, such as /dev/pts/0.
It detects whether standard output/error are connected to a file or a terminal, and changes behaviour accordingly.
Anything else the application developer considered to be the most useful behaviour.
In other words, it's application specific. You're probably better off finding the logfile configuration setting and changing that if you really need to. Usually I find it's easier and safer to leave the defaults (since they may be handy, for example for security reasons such as sandboxing) and instead to point whatever software needs to process the log at its default location.
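To make the first bullet concrete, here is a minimal C illustration (the messages and file name are made up). Redirections rewire file descriptors 1 and 2, but a program can always reopen its controlling terminal by name:

/* tty_direct.c - build with: cc -o tty_direct tty_direct.c
 * Running ./tty_direct >> out.log 2>&1 captures only the first line;
 * the second still appears on the terminal. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("via stdout: this line follows redirections\n");
    fflush(stdout);

    int tty = open("/dev/tty", O_WRONLY);   /* the controlling terminal */
    if (tty >= 0) {
        const char msg[] = "via /dev/tty: this line ignores redirections\n";
        write(tty, msg, sizeof msg - 1);
        close(tty);
    }
    return 0;
}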

A way to output logs without using process.stdout.write in Node.js?

So I'm trying to pipe two Node-based JS scripts together, which works as expected when doing something like this:
How to pipe Node.js scripts together using the Unix | pipe (on the command line)?
essentially
$ ./a.js | ./b.js
The pipe works fine as long as the only thing output to the next script is valid JSON (for example). But I would like to see some debug logs from the first script (ideally without using the popular debug module). Funnily enough, I know the debug module can do this without sending unwanted data down the pipe. How does it do that? I'd rather not dig into their code to find out (lazy).
It seems console.log and console.error both use process.stdout/stderr, so if I log something out, I end up mucking up the pipe.
Difference between "process.stdout.write" and "console.log" in node.js?
Is there a way to use a different tty socket or something? No idea where to start.
Looks like the debug module on npm writes to stderr instead:
By default debug will log to stderr
/**
 * Invokes `util.format()` with the specified arguments and writes to stderr.
 */
function log(...args) {
  return process.stderr.write(util.format(...args) + '\n');
}
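Because the | operator only connects a.js's stdout to b.js's stdin, anything written to stderr flows around the pipe. So writing your own debug lines with console.error (or process.stderr.write) keeps the JSON stream clean, and you can still capture or silence them separately, for example:

$ ./a.js 2> debug.log | ./b.js    # pipe carries the JSON; debug lines go to debug.log
$ ./a.js 2> /dev/null | ./b.js    # discard the debug lines entirely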

How does DIG utility work in FreeBSD and BIND?

I want to know how the dig (Domain Information Groper) command really works when it comes to code and implementation. I mean, when we enter a dig command, which part of the code in FreeBSD or BIND is hit first?
Currently, when I run the dig command, I see control going to a file client.c. Inside this file, the following function is called:
static void
client_request(isc_task_t *task, isc_event_t *event);
But how control reaches this place is still a big mystery for me, even after digging a lot into the 'named' part of the BIND code.
Further, I see this function being called from two places within this file. I put logging into those places to check whether control reaches them through those paths, but unfortunately it doesn't. It seems the client_request() function is somehow being called from somewhere else that I am not able to figure out.
Is there anybody here who can help me resolve this mystery?
Thanks.
Not only for BIND but for any other command: within FreeBSD you can use ktrace. It is very verbose, but it can help you get a quick overview of how the program behaves.
For example, recent FreeBSD releases ship the drill command instead of dig, so if you would like to know what is happening behind the scenes when you run the command, you could give this a try:
# ktrace drill freebsd.org
Then to disable tracing:
# ktrace -C
Once tracing is enabled on a process, trace data will be logged until either the process exits or the trace point is cleared. A traced process can generate enormous amounts of log data quickly; it is strongly suggested that users memorize how to disable tracing before attempting to trace a process.
After running ktrace drill freebsd.org, a file ktrace.out should be created, which you can read with kdump, for example:
# kdump -f ktrace.out | less
That will hopefully "reveal the mystery". In your case, just replace drill with dig and use something like:
# ktrace dig freebsd.org
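The raw dump is enormous, so it helps to filter it down. For instance, the NAMI records show every path-name lookup the process performed, which quickly reveals which config files and libraries it touches:

# kdump -f ktrace.out | grep NAMI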
Thanks to the FreeBSD Ports system, you can compile your own BIND with debugging enabled. To do so, run:
cd /usr/ports/dns/bind913/ && make install clean WITH_DEBUG=1
Then you can run it inside a debugger (lldb /usr/local/bin/dig), break on the line you are interested in, and look at the backtrace to figure out how control reached there.
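As a rough sketch (the breakpoint name follows the question; adapt it to whatever you are chasing), such a session could look like:

$ lldb /usr/local/bin/dig
(lldb) breakpoint set --name client_request
(lldb) run freebsd.org
(lldb) bt

Two caveats worth noting. First, client.c and client_request() belong to the named server side of BIND rather than to the dig binary, so to see that particular breakpoint fire you would more likely attach to a running named (lldb -p <pid>) and then query it with dig. Second, client_request() is registered as an event-handler callback and dispatched through the ISC task library's event loop rather than called directly, which would explain why searching for direct call sites in the source turns up so little.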

About the /proc file system

I am using a command on the /proc file system, which is the following:
echo 0 > /proc/sys/net/ipv4/ip_forward
Note: I don't want to know the basics of the command written above; I want to know what happens when it goes inside the kernel, as I want to implement a /proc file of my own.
Now, if I want to trace the code right from when the 0 is echoed into the file system, how do I go about it?
I want to see where in the kernel code this 0 is accepted and in which variable it gets stored in order to make the change. Can somebody please explain in detail what happens when you run this command? I don't want a description of the command itself.
Any related article on how it changes the kernel parameters is also fine.
I have read this, but it is not explained there: http://www.linuxjournal.com/article/8381
Thanks
Search through the Linux tree (especially the network stack) for the create_proc_entry function. Figure out which file creates ip_forward (it must be in the IPv4 code) from the name passed to create_proc_entry.
When you find the file, look at where the proc_dir_entry structure is created and what functions are assigned to its read_proc and write_proc members. A sketch of that pattern follows below.
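Here is a minimal sketch of that old-style API (create_proc_entry and the read_proc/write_proc members were removed around Linux 3.10 in favor of proc_create; the names my_flag etc. are made up for illustration). The echoed "0" arrives in the write handler as text and gets parsed into a kernel variable:

#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/uaccess.h>

static int my_flag;  /* the value `echo 0 > /proc/my_flag` ends up in */

/* Called when userspace writes to the file: `buffer` holds the raw
 * text ("0\n"), which we copy in, terminate, and parse. */
static int my_write_proc(struct file *file, const char __user *buffer,
                         unsigned long count, void *data)
{
    char kbuf[16];

    if (count >= sizeof(kbuf))
        return -EINVAL;
    if (copy_from_user(kbuf, buffer, count))
        return -EFAULT;
    kbuf[count] = '\0';
    my_flag = simple_strtol(kbuf, NULL, 10);
    return count;  /* tell the VFS we consumed everything */
}

static int __init my_init(void)
{
    struct proc_dir_entry *entry = create_proc_entry("my_flag", 0644, NULL);

    if (!entry)
        return -ENOMEM;
    entry->write_proc = my_write_proc;
    return 0;
}
module_init(my_init);
MODULE_LICENSE("GPL");

For ip_forward specifically, the kernel actually goes through the sysctl table machinery (the value lands in a ctl_table entry whose handler is a proc_dointvec variant), but the idea is the same: a write handler parses the text and stores it in a kernel variable.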

Catching a direct redirect to /dev/tty

I'm working on an application controller for a program that is spitting text directly to /dev/tty.
This is a production application controller that must be able to catch all text going to the terminal. Generally, this isn't a problem; we simply redirect stdout and stderr. However, this particular application makes direct calls to echo and redirects the result to /dev/tty (echo "some text" > /dev/tty), so the redirects set up by my application controller fail to catch the text.
I do have the source for this application, but am not in a position to modify it, nor is it being maintained anymore. Any ideas on how to catch and/or throw away the output?
screen -D -m yourEvilProgram
should work. Much time has passed since I used it, but if you need to read some of its output, it may even be possible to use sockets to read it.
[Added: two links, Rackaid and Pixelbeat, and the home page at GNU]
The classic solution to controlling an application like this is Expect, which sets up pseudo-terminals, does logging, and drives the controlled application from a script. It comes with lots of sample scripts so you can probably just adapt one to fit your needs.
This is what I did in Python:
import os
import pty

pid, fd = pty.fork()
if pid == 0:
    # Child: the new pty is its controlling terminal, so even text the
    # program sends to /dev/tty shows up on the master side.
    os.execv('./my-progr', ['./my-progr'])
    print("execv never returns :-)")
else:
    # Parent: relay everything the child writes to its terminal.
    while True:
        try:
            os.write(1, os.read(fd, 65536))
        except OSError:
            break
I can't quite determine whether the screen program mentioned by @flolo will do what you need or not. It may, but I'm not sure whether there is a built-in logging facility, which appears to be what you need.
There probably is a program out there already to do what you need. I'd nominate sudosh as a possibility.
If you end up needing to write your own, you'll probably need to use a pseudo-tty (pty) and have your application controller sit between the user's real terminal connection and the pty device, where it can log whatever you need it to log (see the sketch below). That's not trivial. You can find information about this in Rochkind's "Advanced UNIX Programming, 2nd Edn" book, and no doubt other similar books (Stevens' "Advanced Programming in the UNIX Environment" is a likely candidate, but I don't have a copy to verify that).
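A bare-bones sketch of that arrangement in C (error handling and terminal raw-mode handling omitted; the program name and log path are placeholders):

/* relay.c - run a program on a pty and log everything it prints,
 * including text it sends straight to /dev/tty.
 * Linux: cc relay.c -lutil  (forkpty lives in <pty.h>)
 * FreeBSD: cc relay.c -lutil, but include <libutil.h> instead. */
#include <pty.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int master;
    pid_t pid = forkpty(&master, NULL, NULL, NULL);

    if (pid == 0) {                        /* child: its tty is the pty */
        execlp("./yourEvilProgram", "yourEvilProgram", (char *)NULL);
        _exit(127);                        /* only reached if exec fails */
    }

    FILE *log = fopen("/tmp/controller.log", "w");   /* placeholder path */
    char buf[4096];
    ssize_t n;
    while ((n = read(master, buf, sizeof buf)) > 0) {
        write(STDOUT_FILENO, buf, (size_t)n);     /* relay to real terminal */
        if (log) fwrite(buf, 1, (size_t)n, log);  /* and keep a copy */
    }
    if (log) fclose(log);
    return 0;
}

This only handles the output direction; a real controller would also forward the user's keystrokes from its own stdin to the pty master, typically multiplexing both with select() or poll().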
