We are profiling kernel modules with OProfile, and opreport prints the following warning:
warning: could not check that the binary file /lib/modules/2.6.32-191.el6.x86_64/kernel/fs/ext4/ext4.ko has not been modified since the profile was taken. Results may be inaccurate.
samples  %        symbol name
1622     9.8381   ext4_iget
1591     9.6500   ext4_find_entry
1231     7.4665   __ext4_get_inode_loc
783      4.7492   ext4_ext_get_blocks
752      4.5612   ext4_check_dir_entry
644      3.9061   ext4_mark_iloc_dirty
583      3.5361   ext4_get_blocks
583      3.5361   ext4_xattr_get
Could anyone please explain what this warning means? Does it impact the accuracy of the OProfile output, and is there any way to avoid it?
Any suggestions are appreciated. Thanks a lot!
Additional information:
In daemon/opd_mangling.c:
if (!sf->kernel)
    binary = find_cookie(sf->cookie);
else
    binary = sf->kernel->name;
...
fill_header(odb_get_data(file), counter,
            sf->anon ? sf->anon->start : 0, last_start,
            !!sf->kernel, last ? !!last->kernel : 0,
            spu_profile, sf->embedded_offset,
            binary ? op_get_mtime(binary) : 0);
For a kernel module, sf->kernel->name is just the module name rather than a path to the .ko file, so op_get_mtime() fails, fill_header() always fills mtime with 0, and the unwanted warning is generated later.
This failure indicates that a stat of the file in question failed. Do an strace -e stat to see the specific failure mode.
time_t op_get_mtime(char const * file)
{
    struct stat st;

    if (stat(file, &st))
        return 0;

    return st.st_mtime;
}
...
if (!header.mtime) {
    // FIXME: header.mtime for JIT sample files is 0. The problem could be that
    // in opd_mangling.c:opd_open_sample_file() the call of fill_header()
    // think that the JIT sample file is not a binary file.
    if (is_jit_sample(file)) {
        cverb << vbfd << "warning: could not check that the binary file "
              << file << " has not been modified since "
              "the profile was taken. Results may be inaccurate.\n";
"Does it impact the accuracy of the OProfile output, and is there any way to avoid this warning?"
Yes, it impacts the output in the sense that opreport loses the chance to warn you that "the last modified time of the binary file does not match that of the sample file...". As long as you're certain that the binary you measured then is the same binary that is installed now, the warning you're seeing is harmless.
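If you want to double-check that by hand, you can compare the module's mtime against the timestamps of the sample files. Below is a minimal sketch in Python; the .ko path is taken from the warning above, and /var/lib/oprofile/samples/current is only assumed to be where your samples live (the usual default; adjust both paths to your setup):
import os, time

# Assumed paths -- adjust for your system.
KO = "/lib/modules/2.6.32-191.el6.x86_64/kernel/fs/ext4/ext4.ko"
SAMPLES_DIR = "/var/lib/oprofile/samples/current"

ko_mtime = os.stat(KO).st_mtime
sample_mtimes = [os.stat(os.path.join(root, name)).st_mtime
                 for root, dirs, files in os.walk(SAMPLES_DIR)
                 for name in files]

print("module mtime :", time.ctime(ko_mtime))
print("oldest sample:", time.ctime(min(sample_mtimes)))
if ko_mtime > min(sample_mtimes):
    print("WARNING: the .ko was modified after profiling started")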
Related
I am learning SystemTap and I have created a simple C program to grasp the basics.
When I run the program and a probe on the host system, the probe works flawlessly, but when I repeat the exact same process in a container I run into some problems (the container is running in privileged mode).
When I leave both the program and the probe running for a few seconds, a WARNING message appears. This is what the stap output looks like after I stop the program:
[root@client ~]# stap -v tmp-probe.stp /root/tmp
Pass 1: parsed user script and 482 library scripts using 115544virt/94804res/16320shr/78424data kb, in 140usr/20sys/168real ms.
Pass 2: analyzed script: 2 probes, 1 function, 0 embeds, 1 global using 116996virt/97852res/17612shr/79876data kb, in 10usr/0sys/6real ms.
Pass 3: using cached /root/.systemtap/cache/73/stap_7357b5a96975af17d2210a04bada9b6a_1260.c
Pass 4: using cached /root/.systemtap/cache/73/stap_7357b5a96975af17d2210a04bada9b6a_1260.ko
Pass 5: starting run.
WARNING: probe process("/root/tmp").statement(0x40113a) at inode-offset 12761464:0000000060a7c1a2 registration error [man warning::pass5] (rc -5)
0
Pass 5: run completed in 0usr/70sys/3579real ms.
The binary has been compiled with: gcc -ggdb3 -O0 tmp.c -o tmp and contains these probe points:
process("/root/tmp").begin $syscall:long $arg1:long $arg2:long $arg3:long $arg4:long $arg5:long $arg6:long
process("/root/tmp").end $syscall:long $arg1:long $arg2:long $arg3:long $arg4:long $arg5:long $arg6:long
process("/root/tmp").plt("puts")
process("/root/tmp").plt("sleep")
process("/root/tmp").syscall $syscall:long $arg1:long $arg2:long $arg3:long $arg4:long $arg5:long $arg6:long
process("/root/tmp").mark("in_test")
process("/root/tmp").function("main#/root/tmp.c:11")
process("/root/tmp").function("test#/root/tmp.c:5")
tmp.c:
#include <stdio.h>
#include <sys/sdt.h>
#include <unistd.h>

void test() {
    STAP_PROBE(test, in_test);
    printf("here\n");
    sleep(1);
}

int main() {
    while (1) {
        test();
    }
}
tmp-probe.stp:
global cnt

probe process(#1).mark("in_test") {
    cnt++
}

probe process(#1).end {
    printf("%ld\n", cnt)
    exit()
}
A more verbose stap output:
[root@client ~]# stap -vv tmp-probe.stp /root/tmp
Systemtap translator/driver (version 4.6/0.186, rpm 4.6-4.fc35)
Copyright (C) 2005-2021 Red Hat, Inc. and others
This is free software; see the source for copying conditions.
tested kernel versions: 2.6.32 ... 5.15.0-rc7
enabled features: AVAHI BOOST_STRING_REF DYNINST BPF JAVA PYTHON3 LIBRPM LIBSQLITE3 LIBVIRT LIBXML2 NLS NSS READLINE MONITOR_LIBS
Created temporary directory "/tmp/stapRsSKyO"
Session arch: x86_64 release: 5.16.20-200.fc35.x86_64
Build tree: "/lib/modules/5.16.20-200.fc35.x86_64/build"
Searched for library macro files: "/usr/share/systemtap/tapset/linux", found: 7, processed: 7
Searched for library macro files: "/usr/share/systemtap/tapset", found: 11, processed: 11
Searched: "/usr/share/systemtap/tapset/linux/x86_64", found: 20, processed: 20
Searched: "/usr/share/systemtap/tapset/linux", found: 407, processed: 407
Searched: "/usr/share/systemtap/tapset/x86_64", found: 1, processed: 1
Searched: "/usr/share/systemtap/tapset", found: 36, processed: 36
Pass 1: parsed user script and 482 library scripts using 115580virt/95008res/16520shr/78460data kb, in 130usr/30sys/163real ms.
derive-probes (location #0): process("/root/tmp").mark("in_test") of keyword at tmp-probe.stp:3:1
derive-probes (location #0): process("/root/tmp").end of keyword at tmp-probe.stp:7:1
Pass 2: analyzed script: 2 probes, 1 function, 0 embeds, 1 global using 117032virt/97860res/17624shr/79912data kb, in 10usr/0sys/6real ms.
Pass 3: using cached /root/.systemtap/cache/73/stap_7357b5a96975af17d2210a04bada9b6a_1260.c
Pass 4: using cached /root/.systemtap/cache/73/stap_7357b5a96975af17d2210a04bada9b6a_1260.ko
Pass 5: starting run.
Running /usr/bin/staprun -v -R /tmp/stapRsSKyO/stap_7357b5a96975af17d2210a04bada9b6a_1260.ko
staprun:insert_module:191 Module stap_7357b5a96975af17d2210a04bada9b_698092 inserted from file /tmp/stapRsSKyO/stap_7357b5a96975af17d2210a04bada9b6a_1260.ko
WARNING: probe process("/root/tmp").statement(0x40113a) at inode-offset 12761464:0000000060a7c1a2 registration error [man warning::pass5] (rc -5)
0
stapio:cleanup_and_exit:352 detach=0
stapio:cleanup_and_exit:369 closing control channel
staprun:remove_module:292 Module stap_7357b5a96975af17d2210a04bada9b_698092 removed.
Spawn waitpid result (0x0): 0
Pass 5: run completed in 10usr/70sys/2428real ms.
Running rm -rf /tmp/stapRsSKyO
Spawn waitpid result (0x0): 0
Removed temporary directory "/tmp/stapRsSKyO"
I'm working with multiple well-formed XML files whose sizes range from 100 MB to 4 GB. My goal is to read them as strings and then import them as ElementTree objects using the .fromstring() method (from the xml.etree.ElementTree module).
However, as the process goes on and the string size increases, two exceptions occurred, both related to memory restrictions:
xml.etree.ElementTree.ParseError: out of memory: line 1, column 0
OverflowError: size does not fit in an int
It looks like the .fromstring() method enforces a size limit on the input string, around 1 GB...?
To debug this, I wrote a short script using a for loop:
import xml.etree.ElementTree as cElementTree  # note: the old xml.etree.cElementTree alias was removed in Python 3.9

xmlFiles_list = [path1, path2, ...]

for fp in xmlFiles_list:
    xml_fo = open(fp, mode='r', encoding="utf-8")
    xml_asStr = xml_fo.read()
    xml_fo.close()
    print(len(xml_asStr.encode("utf-8")) / 10**9)  # display string size in GB
    try:
        etree = cElementTree.fromstring(xml_asStr)
        print(".fromstring() success!\n")
    except Exception as e:
        print(f"Error :{type(e)} {str(e)}\n")
        continue
The output is as follows:
0.895206753
.fromstring() success!
1.220224531
Error :<class 'xml.etree.ElementTree.ParseError'> out of memory: line 1, column 0
1.328233473
Error :<class 'xml.etree.ElementTree.ParseError'> out of memory: line 1, column 0
2.567867904
Error :<class 'OverflowError'> size does not fit in an int
4.080672538
Error :<class 'OverflowError'> size does not fit in an int
I found multiple workarounds to avoid this issue: the .parse() method, or the lxml module for better performance (a minimal sketch follows after my questions below). I just hope someone could shed some light on this:
Is there a specific string size limit in the xml.etree.ElementTree module and its .fromstring() method?
Why do I end up with two different exceptions as the string size increases? Are they related to the same memory-allocation restriction?
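For reference, this is the kind of workaround I mean, a minimal sketch using .parse()/iterparse() instead of .fromstring() (the tag name "record" is just a placeholder):
import xml.etree.ElementTree as ET

# Workaround 1: let ElementTree read the file itself instead of one huge string.
tree = ET.parse(fp)    # fp is a path to one of the XML files
root = tree.getroot()

# Workaround 2: stream the file with iterparse() so the whole document never
# has to live in memory as a single string.
for event, elem in ET.iterparse(fp, events=("end",)):
    if elem.tag == "record":     # placeholder tag name
        # ... process elem here ...
        elem.clear()             # free the subtree once it has been processed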
Python version/system: 3.9 (64-bit)
RAM: 32 GB
Hope my topic is clear enough; I'm new on Stack Overflow.
I am experiencing a peculiar "info" message from GNAT 7.4.0 (running on an "Ubuntu 19.04" system) while in the early stages of developing a QR-code generator.
I'm using some fairly aggressive compilation switches:
gnatmake -gnata -gnateE -gnateF -gnatf -gnato -gnatv -gnatVa -gnaty -gnatwe -gnatw.e main.adb
My code does build without errors, but the info message suggests that I'm not providing a body for the package QR_Symbol.
qr_symbol.ads
with QR_Versions; use QR_Versions;
generic
   Ver : QR_Version;
package QR_Symbol is
   procedure Export_As_SVG;
private
   type Module_State is (
      Uncommitted,
      One,
      Zero
   );
   type Module_Family is (
      Uncommitted,
      Finder,
      Separator,
      Alignment,
      Timing,
      Format_Spec,
      Version_Spec,
      Data_Codeword,
      EC_Codeword,
      Padding
   );
   type Module is
      record
         State : Module_State := Uncommitted;
         Family : Module_Family := Uncommitted;
      end record;
   type Module_Matrix is array (
      Positive range <>,
      Positive range <>
   ) of Module;
end QR_Symbol;
qr_symbol.adb
with Ada.Text_IO; use Ada.Text_IO;
package body QR_Symbol is
   Version : constant QR_Version := Ver; -- Ver is a formal generic parameter
   Side_Length : constant Positive := 17 + (Positive (Ver) * 4);
   Matrix : Module_Matrix (1 .. Side_Length, 1 .. Side_Length);
   procedure Export_As_SVG is
   begin
      Put_Line ("in Export_As_SVG()...");
      Put_Line (" Version: " & Version'Image);
      Put_Line (" Side_Length: " & Side_Length'Image);
      -- Matrix (1, 1).State := One;
      Put_Line (" Matrix (1, 1).State: " & Matrix (1, 1).State'Image);
   end Export_As_SVG;
end QR_Symbol;
And here's the info output that I do not understand...
GNAT 7.4.0
Copyright 1992-2017, Free Software Foundation, Inc.
Compiling: qr_symbol.adb
Source file time stamp: 2019-12-07 16:29:37
Compiled at: 2019-12-07 16:29:38
==============Error messages for source file: qr_symbol.ads
9. procedure Export_As_SVG;
|
>>> info: "QR_Symbol" requires body ("Export_As_SVG" requires completion)
29 lines: No errors, 1 info message
aarch64-linux-gnu-gnatbind-7 -x main.ali
aarch64-linux-gnu-gnatlink-7 main.ali
Program output (given correct input, it does give correct output)...
$ ./main '' V1
QR Version requested: V 1
in Export_As_SVG()...
Version: 1
Side_Length: 21
Matrix (1, 1).State: UNCOMMITTED
QUESTION:
Why is there an info message suggesting that I need to provide a body for this package when it is clear that I have already done so?
An info message is not meant to suggest that you should change your program, only to provide some (useful or not) information. In your case, the information is true: if the requirement it describes weren't fulfilled, it would turn into an error.
You may want to check whether this switch is what causes the message to be generated:
According to GNAT User's Guide:
-gnatw.e
`Activate every optional warning.'
This switch activates all optional warnings, including those which are not activated by -gnatwa. The use of this switch is not recommended for normal use. If you turn this switch on, it is almost certain that you will get large numbers of useless warnings. The warnings that are excluded from -gnatwa are typically highly specialized warnings that are suitable for use only in code that has been specifically designed according to specialized coding rules.
And if you don't want to remove that switch, you can at least disable this specific info message:
-gnatw.Y
`Disable information messages for why package spec needs body.'
This switch suppresses the output of information messages showing why a package specification needs a body.
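For example, adapting the command line from the question (untested; this just shows where the extra switch goes):
gnatmake -gnata -gnateE -gnateF -gnatf -gnato -gnatv -gnatVa -gnaty -gnatwe -gnatw.e -gnatw.Y main.adb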
When I type "lshosts" I am given:
HOST_NAME type model cpuf ncpus maxmem maxswp server RESOURCES
server1 X86_64 Intel_EM 60.0 12 191.9G 159.7G Yes ()
server2 X86_64 Intel_EM 60.0 12 191.9G 191.2G Yes ()
server3 X86_64 Intel_EM 60.0 12 191.9G 191.2G Yes ()
I am trying to return maxmem and maxswp as megabytes, not gigabytes, when lshosts is called. I am trying to send Xilinx ISE jobs to my LSF cluster; however, the software expects integer, megabyte values for maxmem and maxswp. From debugging, it appears that the software grabs these parameters using the lshosts command.
I have already checked in my lsf.conf file that:
LSF_UNIT_FOR_LIMITS=MB
I have tried searching the IBM Knowledge Base, but to no avail.
Do you use a specific command to specify maxmem and maxswp units within the lsf.conf, lsf.shared, or other config files?
Or does LSF force return the most practical unit?
Any way to override this?
LSF_UNIT_FOR_LIMITS should work, provided you have completely drained the cluster of all running, pending, and finished jobs. According to the docs, MB is the default, so I'm surprised it isn't already doing what you want.
That said, you can use something like this to transform the results:
$ cat to_mb.awk
function to_mb(s) {
    # position of the trailing unit letter in "KMG": K -> 1, M -> 2, G -> 3
    e = index("KMG", substr(s, length(s)))
    # numeric part without the unit letter (substr positions start at 1, not 0)
    m = substr(s, 1, length(s) - 1)
    # scale to megabytes: K -> /1000, M -> unchanged, G -> *1000
    return m * 10^((e - 2) * 3)
}
{ print $1 " " to_mb($6) " " to_mb($7) }
$ lshosts | tail -n +2 | awk -f to_mb.awk
server1 191900 159700
server2 191900 191200
server3 191900 191200
The to_mb function should also handle 'K' or 'M' units, should those pop up.
If LSF_UNIT_FOR_LIMITS is defined in lsf.conf, lshosts will always print the output as a floating point number, and in some versions of LSF the parameter is defined as 'KB' in lsf.conf upon installation.
Try searching for any definitions of the parameter in lsf.conf and commenting them all out so that the parameter is left undefined, I think in that case it defaults to printing it out as an integer in megabytes.
(Don't ask me why it works this way)
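For example, the kind of edit meant above (just a sketch; the exact reload step can vary between LSF versions):
# in lsf.conf -- comment out any existing definition, e.g.:
# LSF_UNIT_FOR_LIMITS=KB
Then reconfigure the LIM daemons (e.g. lsadmin reconfig, or however your site reloads lsf.conf) so that lshosts picks up the change.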
What command would give me the output I need for each instance of an error code in a very large log file? The file has records marked by a begin line and an end line with a number of characters, such as:
SR 120
1414760452 0 1 Fri Oct 31 13:00:52 2014 2218714 4
GROVEMR2 scn
../SrxParamIF.m 284
New Exam Started
EN 120
The 5th field is the error code, 2218714 in the previous example.
I thought of just grep'ing for the error code and printing some number of lines after each match with -A, then picking what I needed from that rather than parsing the entire file. That seems easy, but my grep/awk/sed usage isn't at that level.
ONLY when error 2274021 is encountered, as in the following example, I'd like output as shown below.
Show me output such as what this would give: egrep 'Coil:|Connector:|Channels faulted:|First channel:' ERRORLOG | less
Part of input file of interest:
Mon Nov 24 13:43:37 2014 2274021 1
AWHMRGE3T NSP
SCP:RfHubCanHWO::RfBias 4101
^MException Class: Unknown Severity: Unknown
Function: RF: RF Bias
PSD: VIBRANT Coil: Breast SMI Scan: 1106/14
Coil Fault - Short Circuit
A multicoil bias fault was detected.
.
Connector: Port 1 (P1)
Channels faulted: 0x200
First channel: 10 of 32, counting from 1
Fault value: -2499 mV, Channel: 10->
Desired output:
Coil: Breast SMI
Connector: Port 1 (P1)
Channels faulted: 0x200
First channel: 10 of 32, counting from 1
Thanks in advance for any pointers!
Try the following (adapting it as convenient):
#!/usr/bin/perl
use strict;
use warnings;

$/ = "\nEN ";                 # records are separated by "\nEN "
my $error = 2274021;          # the error we are interested in

while (<>) {                  # for every record
    next unless /\b$error\b/; # skip records that don't contain the error
    for my $line (split(/\n/, $_)) {
        print "$line\n"
            if ($line =~ /Coil:|Connector:|Channels faulted:|First channel:/);
    }
    print "====\n";
}
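If it helps, save it as, say, extract_errors.pl (the name is just an example) and run it as perl extract_errors.pl ERRORLOG, or pipe the log into it on standard input; the while(<>) loop accepts either.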
Is this what you need?