How to get the function name after '}' in ftrace/trace-cmd (Linux)

I have recently started working with ftrace. I am able to get a function graph trace using:
trace-cmd record -p function_graph -F <bash command>
trace-cmd report
With the above commands I can see the time each function took to finish before its closing '}'.
But I want to script listing the functions that take longer than 10 ms.
Can anyone help me achieve this?

TLDR
It can be achieved with:
trace-cmd report -O fgraph:tailprint=yes
Explanation
As per the help:
-O plugin option -O [plugin:]var[=val]
But passing the plugin name function_graph, the one we use for record, doesn't work.
Likewise, if we dig into the ftrace docs there is an option funcgraph-tail, but even this won't work here.
Finally, digging into the trace-cmd source code,
trace-cmd/lib/trace-cmd/trace-ftrace.c
has these options.
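Once the tail comments are printed, the per-function durations can be filtered with a short pipeline. This is a sketch only: it assumes the function_graph report prints durations as "<number> us" somewhere on the line (so 10 ms = 10000 us); adjust the field handling if your report output differs.

```shell
# Keep only report lines whose duration field exceeds 10 ms (10000 us).
trace-cmd report -O fgraph:tailprint=yes |
awk '{ for (i = 1; i < NF; i++)
         if ($(i+1) == "us" && $i + 0 > 10000) { print; break } }'
```

With tailprint=yes the closing '}' lines carry the function name as a comment, so the surviving lines identify the slow functions directly.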

Related

Check whether date_filename.txt exists

I've created a set of Linux commands that run a series of pings and write the log into two files, date-time_successping and date-time_failping. When a date-time_failping file exists, the system should send an email to the PIC. But the filename is partly unpredictable, since it has a date in front of it, e.g. 20170911-160455_failping. I tried tests like -c/-f, but they cannot search without a specific name, and there will be multiple files with varying dates. I need some advice on this; hope anyone could help.
Thanks
P/S: I'm sorry if the information given is not enough; please reply if more info is needed so that I can provide it.
Regards
You can optionally do it something like this.
This command gets yesterday's date:
VAR1=`date +%Y%m%d -d "yesterday"`
Or, to work with today's date:
VAR1=`date +%Y%m%d`
file="${VAR1}_failping"
if [ -f "$file" ]
then
    echo "My message" | mail -s subject user@gmail.com
else
    : # Do something else
fi
You can use inotify to monitor your log directory for file-creation events, then check whether the file name matches the xxxxx_failping pattern. Assuming your log files go into ping_logs, you can use the following to monitor it:
$ inotifywait -rme create ping_logs/ | awk '{if($NF ~ /[0-9]+-[0-9]+_failping/) print $NF}'
This just prints the file name; change that to your mailing action.
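If real-time notification isn't needed, a periodic glob check also copes with the date-time prefix. A minimal sketch, assuming the directory layout above and filenames like 20170911-160455_failping (swap the echo for your mail command):

```shell
#!/bin/sh
# Act on any *_failping file in ping_logs/, whatever the date-time prefix.
for f in ping_logs/*_failping; do
    [ -e "$f" ] || continue   # the glob matched nothing
    echo "found failure log: $f"
done
```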

Find and Replace in bash Shell

Please advise on replacing a variable with the latest date & time.
Here is my requirement:
FN=`basename $0`
TS=`date '+%m/%d/%Y %T'`
QD='08/27/2014 16:25:45'
Then I have a query to run. After it has run, I need to take $TS (the current system date & time) and assign it as the value of the QD variable. This is a loop process, and the value gets updated every time the script runs.
I've tried using sed but was not successful.
Please help.
Programmatically modifying your script to have a different timestamp constant is absolutely and emphatically the wrong way to handle this problem.
Instead, when you want to mark that the query has been done, simply touch a file:
touch lastQueryCompletion
...and when you want to know when the query was last done, check that file's timestamp:
# with GNU date
QD=$(date -r lastQueryCompletion '+%m/%d/%Y %T')
# or, with Mac OS X stat
QD=$(stat -t '%Y/%m/%d %H:%M:%S' -f '%Sm' lastQueryCompletion)
Although you haven't mentioned the overall goal that you wish to accomplish, I have a feeling something like this would be more robust than using sed to update an existing script file.
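Putting the pieces above together, a minimal sketch of the marker-file pattern (GNU date's -r option assumed; the default value is the one from the question):

```shell
#!/bin/sh
# Read the previous completion time from the marker file's mtime,
# run the query, then touch the marker for the next run.
if [ -e lastQueryCompletion ]; then
    QD=$(date -r lastQueryCompletion '+%m/%d/%Y %T')
else
    QD='08/27/2014 16:25:45'   # first-run default
fi
# ... run the query here ...
touch lastQueryCompletion      # record this run's completion time
echo "previous query completed at: $QD"
```

No script file is ever rewritten; the timestamp lives in the filesystem metadata instead.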
FN=`basename $0`
TS=`date '+%m/%d/%Y %T'`
QD='08/27/2014 16:25:45'   # default for the very first run
# Load the latest QD (from the last run), if one was saved
[ -e ~/.QD.saved ] && QD="`cat ~/.QD.saved`"
...later in that file...
# Save the new QD value for the next run
echo "$TS" > ~/.QD.saved
Although I'm not sure sed is the tool you're looking for, the command would have to go like this (double quotes so that $TS expands, and a | delimiter because $TS contains slashes):
sed -i "s|^QD=.*|QD='$TS'|" "$FN"
I'm assuming you're using GNU sed, whose -i option does an in-place substitution rather than printing the result to standard output.
Well, hope it helps.
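The two quoting details matter. A self-contained sketch, using a throwaway script.sh in place of the real script file:

```shell
# The sed program must be double-quoted so the shell expands $TS, and
# since $TS contains slashes, use | as the s-command delimiter.
TS=$(date '+%m/%d/%Y %T')
printf "QD='08/27/2014 16:25:45'\n" > script.sh
sed -i "s|^QD=.*|QD='$TS'|" script.sh
```

With single quotes around the sed program, the file would end up containing the literal text $TS.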

Incorrect exit status of wrapper script in KSH

We have a wrapper script for Teradata TPT utility. The wrapper script is pretty straightforward but the problem is that the exit status of the wrapper is not the same as that of the utility. In many cases, the script returns 0 even if the utility fails. I have saved the exit status in a separate variable because some steps need to be done before exiting but exiting with this variable's value doesn't seem to work. Or is the utility returning status 0 even in case of some failures even if the logs clearly specify some other status?
The worst part is that this behavior is quite random; sometimes the script does fail with the exit status of the utility. I want to be sure whether there is a problem with the utility's exit status.
The script runs through KSH. The final part of the wrapper script is:
tbuild -f $sql.tmp -j ${id}_$JOB >$out 2>&1
ret_code=$?
cd ${TWB_ROOT}/logs
logpath=`ls -t ${TWB_ROOT}/logs/${id}_${JOB}*.out |head -1`
logpath1=${logpath##*/}
logname=${logpath1%-*}
tlogview -l ${logpath} > /edw/$GROUP/tnl/jobs/$JOB/logs/tpt_logs/${logname}.log
### Maintaining 3 tpt binary log files
if [ $ret_code -eq 0 ]
then
    binout=$TPTLOGDIR/${logname}.dat
    binout1=$TPTLOGDIR/${logname}.dat1
    binout2=$TPTLOGDIR/${logname}.dat2
    [ -f $binout1 ] && mv $binout1 $binout2
    [ -f $binout ] && mv $binout $binout1
    mv "$logpath" "/edw/${GROUP}/tnl/jobs/$JOB/logs/tpt_logs/${logname}.dat"
fi
rm -f $sql.tmp
echo ".exit"
exit $ret_code
Thanks in advance for the help and suggestions.
The script looks ok, and should indeed return the same exit code as the tbuild utility.
It comes down to knowledge of the specific product.
I've never worked with any of these products, but Teradata has an ample User Guide for the Parallel Transporter, with an explicit Post-Job Considerations section, warning:
Even if the job completed successfully, action may still be required based on error and warning information in the job logs and error tables.
So technically, a job might complete, but results may vary from time to time.
I guess you have to define your own policies and scan the logfiles for patterns of warnings and error messages, and then generate your own exit codes for semantic failures. Tools like logstash or splunk might come in handy.
BTW, you might consider using logrotate for rotating the $TPTLOGDIR/${logname}.dat files.
Turns out that the issue was in the utility itself, as suspected. Shell script worked fine.

Read and parse perf.data

I am recording performance counters from Linux using the command perf record.
I want to use the resulting perf.data as input to other programs. How should I read and parse the data in perf.data? Is there a way to transform it into a .txt or .csv file?
There is a built-in perf.data parser and printer in the perf tool of linux-tools: the script subcommand.
To convert a perf.data file:
perf script > perf.data.txt
To convert output that perf record wrote to another file (perf record -o filename.data), use the -i option:
perf script -i filename.data > filename.data.txt
perf script is documented at man perf-script, available online at http://man7.org/linux/man-pages/man1/perf-script.1.html
perf-script - Read perf.data (created by perf record) and display
trace output
This command reads the input file and displays the trace recorded.
'perf script' to see a detailed trace of the workload that was
recorded.
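Getting CSV out of this is a matter of post-processing the perf script text. A sketch, where the -F field list is an assumption: pick the fields your perf.data actually contains (see man perf-script for the full list) and keep the awk columns in sync.

```shell
# Flatten `perf script` output into CSV with a header row.
perf script -F comm,pid,time,event -i perf.data |
awk 'BEGIN { OFS = ","; print "comm,pid,time,event" }
     NF >= 4 { print $1, $2, $3, $4 }' > perf.csv
```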
perf data convert --to-json landed in April.
https://man7.org/linux/man-pages/man1/perf-data.1.html
The quipper sub-project of https://github.com/google/perf_data_converter can parse perf.data files.
An example Nagios command definition that redirects service check performance data to a text file for later processing by another application is shown below:
define command{
    command_name store-service-perfdata
    command_line /bin/echo -e "$LASTSERVICECHECK$\t$HOSTNAME$\t$SERVICEDESC$\t$SERVICESTATE$\t$SERVICEATTEMPT$\t$SERVICESTATETYPE$\t$SERVICEEXECUTIONTIME$\t$SERVICELATENCY$\t$SERVICEOUTPUT$\t$SERVICEPERFDATA$" >> /usr/local/nagios/var/service-perfdata.dat
}

Generating HTML output from criterion

There is a nice example of HTML output from criterion at http://bos.github.com/criterion/.
Which command-line option is used to generate this output?
An answer to a related question asserts that this output exists, but it does not seem to show up in the command-line options when using --help.
Sorry I didn't get around to your comment-question.
The answer Jedai gives is right - just use -o. For example, here is a line from one of my Makefiles for running benchmarks using defaultMain from Criterion:
./Bench -g -u Bench.csv -o Bench.html -s $(SAMPLES)
Breaking that down, it says:
-g run GC between each sample
-u output CSV data to the given file
-o output HTML data to the given file
-s collect this many samples
Well if you just want html output, then yourBench -o yourReport.html will generate some perfectly reasonable output. If you want to use your own template, look at the templates/report.tpl example in the distribution and use the -t option.
It seems to me that you just pass the template as a command line option, and then it populates it. If the template happens to be an html template, then you've generated html.
See the source here: https://github.com/bos/criterion
