Is there a way on a RHEL system to trace back where a syslog() call comes from?
I am getting a weird generic message in my syslog and I have no idea where it is coming from.
Can I modify the main syslog function to output the stack trace when it encounters the log line I am looking for?
How would I go about this?
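One approach (my own sketch, not something from the question): attach gdb to the suspect process and break on the libc syslog(3) call, printing a backtrace at every hit. The helper below just builds the gdb invocation from a generated command file; the helper name is mine, you'd run the result as root (or the process owner), and useful backtraces need debug symbols installed.

```python
import tempfile

GDB_SCRIPT = """\
break syslog
commands
silent
bt
continue
end
continue
"""

def gdb_attach_cmd(pid):
    # Write a gdb command file that breaks on syslog(3), prints a
    # backtrace at every hit, and resumes the target process.
    script = tempfile.NamedTemporaryFile("w", suffix=".gdb", delete=False)
    script.write(GDB_SCRIPT)
    script.close()
    return ["gdb", "-p", str(pid), "-batch", "-x", script.name]

if __name__ == "__main__":
    print(" ".join(gdb_attach_cmd(4242)))
```

From the backtrace you can usually read off the library or binary that issued the log line. Filtering for one specific message would mean adding a condition on the breakpoint that inspects syslog's format-string argument.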
TL;DR: Does ftrace have a function similar to trace_printk() which can print out the current function's call stack?
I know how to print a function or function-graph trace using ftrace, but sometimes I want to check the call trace at one place without touching the others. That is, keep the tracer everywhere else as nop, so that trace_printk() only outputs a single line of log, while the one place I'm curious about uses the function tracer; that way I can dig into what happened at that spot. Is that feasible?
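ftrace can do this, though not through trace_printk() itself: write the one function you care about into set_ftrace_filter, then enable the func_stack_trace option, and every hit of that function in the trace buffer is followed by its call stack, while everything else stays quiet. A Python sketch of the tracefs writes (the helper names are my own; the mount point varies by kernel, and the writes need root):

```python
# tracefs lives at /sys/kernel/tracing on newer kernels,
# /sys/kernel/debug/tracing on older ones.
TRACEFS = "/sys/kernel/debug/tracing"

def stack_trace_steps(func_name, tracefs=TRACEFS):
    # Ordered (file, value) writes. The filter is set FIRST, so the
    # stack-trace option only applies to the one function of interest;
    # enabling func_stack_trace without a filter stacks every function
    # and slows the machine to a crawl.
    return [
        (tracefs + "/set_ftrace_filter", func_name),
        (tracefs + "/current_tracer", "function"),
        (tracefs + "/options/func_stack_trace", "1"),
        (tracefs + "/tracing_on", "1"),
    ]

def apply_steps(steps):
    # Performs the writes; requires root.
    for path, value in steps:
        with open(path, "w") as f:
            f.write(value)
```

Afterwards, cat the `trace` file and each entry for the filtered function is followed by a `<stack trace>` block.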
I'm currently doing some experiments, and I need to record all the events generated during a normal stress-ng execution cycle like this: /usr/bin/stress-ng -c 80 -t 30 --times --exec 50 --exec-ops 50, specifically the ones related to exec (sched:sched_process_exec and syscalls:sys_enter_execve).
Unfortunately, when analysing the trace file, I see some processes that didn't generate any sys_execve but were still captured by sched_process_exec, which makes no sense to me.
This happened even though no events were lost (in the trace file the buffer/written entries match, and trace-cmd doesn't warn about lost events).
Given this situation I can't understand why it happens, and the only explanation I can offer is that these events are simply not being recorded. Any help would be appreciated.
Here's an example trace file for reference.
To be clear about what I mean, these lines should be the norm:
stress-ng-1748 [001] .... 19573.548553: sys_execve(filename: 7ffe7a791720, argv: 7ffe7a791700, envp: 7ffe7a7916f8)
stress-ng-1748 [001] .... 19573.548707: sched_process_exec: filename=/usr/bin/stress-ng pid=1748 old_pid=1748
A process which generated both the sys_execve event and the sched_process_exec event.
Whereas this one:
stress-ng-1780 [005] .... 19573.598398: sched_process_exec: filename=/usr/bin/stress-ng pid=1780 old_pid=1780
which is the last one in the linked file, is an example of a process without an associated sys_execve event.
Bonus question: I'd also need to record the equivalent fork event (namely syscalls:sys_enter_fork) from a stress-ng run with fork-ops (or something equivalent), but I haven't been able to, neither with trace-cmd nor manually with Ftrace. I've read around the internet that there are some special cases when dealing with forking processes, but I couldn't work out what to do to record this event in particular.
Any help on this matter would be appreciated as well.
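On the bonus question, one likely explanation (not from this thread, just a libc detail): glibc implements fork() via the clone() syscall, so syscalls:sys_enter_fork rarely or never fires for ordinary processes; syscalls:sys_enter_clone and the scheduler tracepoint sched:sched_process_fork are what actually record the activity. A sketch of a trace-cmd invocation covering all three (the helper name is mine; the stress-ng flags are assumed from its man page):

```python
def fork_record_cmd(workload):
    # glibc's fork() wraps clone(), so trace clone and the scheduler's
    # fork tracepoint alongside the raw fork syscall.
    events = [
        "sched:sched_process_fork",
        "syscalls:sys_enter_clone",
        "syscalls:sys_enter_fork",  # rarely fires with glibc; kept for completeness
    ]
    cmd = ["trace-cmd", "record"]
    for ev in events:
        cmd += ["-e", ev]
    return cmd + workload

if __name__ == "__main__":
    print(" ".join(fork_record_cmd(
        ["/usr/bin/stress-ng", "--fork", "4", "--fork-ops", "50"])))
```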
I solved this problem by also capturing the event syscalls:sys_enter_execve. Between the two of them I was able to get every instance of exec called.
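For reference, a sketch of the recording step that worked, driven from Python (the event names come from the question; the helper name is mine, and trace-cmd must be installed and run as root):

```python
def record_exec_events(workload):
    # Capture both tracepoints: sched_process_exec fires on every
    # successful image replacement, while sys_enter_execve only covers
    # entry to the execve syscall, so together they account for each exec.
    # If anything still slips through, there is also a separate
    # syscalls:sys_enter_execveat tracepoint for the execveat syscall.
    cmd = ["trace-cmd", "record",
           "-e", "sched:sched_process_exec",
           "-e", "syscalls:sys_enter_execve"]
    return cmd + workload  # run with subprocess.run(cmd, check=True) as root

if __name__ == "__main__":
    print(" ".join(record_exec_events(
        ["/usr/bin/stress-ng", "-c", "80", "-t", "30",
         "--times", "--exec", "50", "--exec-ops", "50"])))
```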
I have a Python subprocess that runs an arbitrary C++ program (student assignments, if it matters) via Popen. The structure is such that I write a series of inputs to stdin, and at the end I read all of stdout and parse it for the responses to each input.
Of course, given that these are student assignments, they may crash after certain inputs. What I need to know is after which specific input their program crashed.
So far I know that when a runtime exception is thrown in the C++ program, it's printed to stderr. So right now I can read stderr after the fact and see that it did in fact crash. But I haven't found a way to read stderr while the program is still running, so that I can infer that the error is in response to the latest input. Every SO question or article I have run into seems to use subprocess.communicate(), but communicate blocks until the subprocess returns; that hasn't worked for me because I need to keep sending inputs to the program after the fact if it hasn't crashed.
What I need to know is after which specific input their program crashed.
Call process.stdin.flush() after process.stdin.write(b'your input'). If the process is already dead then either .write() or .flush() will raise an exception (the specific exception may depend on the system, e.g. BrokenPipeError on POSIX).
Unrelated: if you are redirecting all three standard streams (stdin=PIPE, stdout=PIPE, stderr=PIPE) then make sure to consume the stdout and stderr pipes concurrently while you are writing the input; otherwise the child process may hang if it generates enough output to fill the OS pipe buffer. You could use threads or asyncio to do it.
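A minimal sketch of the threaded variant (the helper names and structure are my own; it assumes a line-oriented child that exits on EOF):

```python
import subprocess
import threading

def _drain(pipe, sink):
    # Collect lines from a pipe until it closes; runs in a background
    # thread so the child never blocks on a full OS pipe buffer.
    for line in iter(pipe.readline, b""):
        sink.append(line)
    pipe.close()

def run_with_live_stderr(cmd, inputs):
    """Feed inputs one at a time and report after which input the child died.

    Sketch only: there is an inherent race, since the child may crash just
    after a write appears to succeed, so treat the index as a best guess."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out_lines, err_lines = [], []
    threads = [
        threading.Thread(target=_drain, args=(proc.stdout, out_lines), daemon=True),
        threading.Thread(target=_drain, args=(proc.stderr, err_lines), daemon=True),
    ]
    for t in threads:
        t.start()

    crashed_after = None
    for i, line in enumerate(inputs):
        try:
            proc.stdin.write(line.encode() + b"\n")
            proc.stdin.flush()  # raises BrokenPipeError if the child is gone
        except BrokenPipeError:
            crashed_after = i - 1  # the previous input is the likely culprit
            break
    try:
        proc.stdin.close()
    except BrokenPipeError:
        pass
    proc.wait()
    for t in threads:
        t.join()
    return crashed_after, b"".join(out_lines), b"".join(err_lines)
```

With asyncio the same structure falls out of asyncio.create_subprocess_exec plus two reader tasks, one per stream.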
I am using node.js and want to handle error messages.
What are the differences between error, stderr, and stdout?
When writing shell scripts, I redirected stderr, found a useful error message there, and it solved my problem.
I'm also not clear on the concept of what kinds of output a computer has. Can anyone explain in a comprehensive way?
Thanks.
It is actually an interesting question. You would probably get more answers if you formatted the title of your question like this -- Node JS difference between error, stderr, and stdout.
I won't repeat the difference between stdout and stderr, as that has been answered previously.
However, the difference between error and stderr is not as easily distinguished.
error is an Error object created by Node JS because it had a problem executing your command. See more here
stderr is a standard output stream that receives text because something went wrong during execution -- that is, Node JS had no trouble executing your command; it is your command itself that reports the error.
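The split is easier to see with a runnable example. Python's subprocess module is used here purely for illustration, since it draws the same line: an exception/error object when the runtime cannot execute the command at all (Node's error argument), versus text on stderr when the command ran but complained.

```python
import subprocess

# Case 1: the runtime cannot execute the command at all.
# This is the analogue of Node's `error` argument.
got_error = None
try:
    subprocess.run(["no-such-command-zzz"], capture_output=True)
except FileNotFoundError as exc:
    got_error = exc
print("error object:", got_error)

# Case 2: the command executes fine as far as the runtime is concerned,
# but the command itself reports a problem on its stderr stream.
result = subprocess.run(["ls", "no-such-file-zzz"],
                        capture_output=True, text=True)
print("exit code:", result.returncode)
print("stderr:", result.stderr.strip())
```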
Let me know if this is clear, otherwise, I'm happy to throw in an example:)
stderr and stdout are streams. Writing to the console logs both streams. Apparently the distinction exists so that if we want to (for example) redirect certain data elsewhere, we have the ability to be selective.
You may find the following article helpful.
http://www.jstorimer.com/blogs/workingwithcode/7766119-when-to-use-stderr-instead-of-stdout
I have long wondered why log4j defaults to outputting an error message when there is no log4j.properties. A reasonable default to stdout or stderr would make more sense. Is there a FAQ or a discussion about this somewhere that indicates the reasoning behind this decision? I have always considered that to be the only thing about log4j that is worse than other logging alternatives.
Let's say that you use log4j in a program where fds 0, 1, and 2 are either closed (which would cause write(2) to fail with EBADF) or redirected to /dev/null. Then log4j tries to output a log message. What would happen in this situation? You'd have a silent failure, which is something you'd want to avoid having happen in a logging library at all costs.
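The scenario is easy to demonstrate at the file-descriptor level (Python here rather than Java, since the point is about the OS, not log4j itself): a write to an already-closed descriptor fails with EBADF, which is exactly the failure a logging library would hit if it silently assumed stdout or stderr were usable.

```python
import errno
import os

# Simulate a process whose output descriptor has been closed,
# then attempt the kind of write a logging library would make.
r, w = os.pipe()
os.close(r)
os.close(w)
try:
    os.write(w, b"some log line\n")
except OSError as exc:
    print("write failed:", errno.errorcode[exc.errno])  # EBADF
```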