I've been writing a program that tries to find its own process using the procps library, but for some reason it smashes the stack.
This is my code:
#include <stdio.h>
#include <unistd.h>
#include <proc/readproc.h>

int main(){
    PROCTAB *ptp;
    proc_t task;
    pid_t mypid[1];

    mypid[0] = getpid();
    printf("My id: %d\n", mypid[0]);
    ptp = openproc(PROC_PID, mypid, 1);
    if(readproc(ptp, &task)){
        printf("Task id:%d\n", task.XXXID);
    }
    else{
        printf("Error: could not find current task\n");
    }
    closeproc(ptp);
    printf("Done\n");
    return 0;
}
The output I get when I run the program is:
$ ./test
My id: 8514
Task id:8514
Done
*** stack smashing detected ***: ./test terminated
======= Backtrace: =========
/lib/i386-linux-gnu/libc.so.6(__fortify_fail+0x45)[0xb7688dd5]
/lib/i386-linux-gnu/libc.so.6(+0xffd8a)[0xb7688d8a]
./test[0x804863e]
/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0xb75a24d3]
./test[0x80484f1]
======= Memory map: ========
...
Aborted (core dumped)
Does anyone have an idea why this happens?
Am I doing something wrong?
Thanks.
Edit:
I've looked at the header file and noticed that I was using the openproc function incorrectly: the correct way to call it (for PROC_PID) is with a null-terminated mypid array, so I've changed my code to:
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <proc/readproc.h>

int main(){
    PROCTAB *ptp;
    proc_t task;
    pid_t mypid[2];

    mypid[0] = getpid();
    memset(&mypid[1], 0, sizeof(pid_t));
    printf("My id: %d\n", mypid[0]);
    ptp = openproc(PROC_PID, mypid);
    if(readproc(ptp, &task)){
        printf("Task id:%d\n", task.XXXID);
    }
    else{
        printf("Error: could not find current task\n");
    }
    closeproc(ptp);
    printf("Done\n");
    return 0;
}
and it still smashes the stack.
It works for me here. After getting that version of procps (3.2.8, as in the link line below), it compiled and ran fine:
$ gcc -Wall -Werror -o rp -L. -lproc-3.2.8 rp.c
$ ./rp
My id: 11468
Task id:11468
Done
Update
Try a modified version:
proc_t *result;
...
if ((result = readproc(ptp, NULL))) {
    printf("Task id:%d\n", result->XXXID);
    freeproc(result);
}
A possible cause for your crash is the fact that the proc_t struct returned by readproc() has additional dynamically allocated elements, such as environment variables or command line arguments. A safer way is to let readproc() allocate the whole structure, and free it later using freeproc():
while ((proc_info = readproc(proc, NULL)) != NULL) {
    // do something with proc_info
    freeproc(proc_info);
}
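Putting the pieces together, the whole program along those lines might look like this. This is only a sketch against the same procps 3.2.x API used above, with the null-terminated pid list and freeproc() folded in:

#include <stdio.h>
#include <unistd.h>
#include <proc/readproc.h>

int main(void)
{
    pid_t mypid[2] = { getpid(), 0 };    /* PROC_PID expects a 0-terminated list */
    printf("My id: %d\n", mypid[0]);

    PROCTAB *ptp = openproc(PROC_PID, mypid);
    proc_t *task;
    if ((task = readproc(ptp, NULL))) {  /* let readproc() allocate the proc_t */
        printf("Task id:%d\n", task->XXXID);
        freeproc(task);                  /* also frees the dynamically allocated members */
    } else {
        printf("Error: could not find current task\n");
    }
    closeproc(ptp);
    printf("Done\n");
    return 0;
}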
I've always thought that using std::cout << something was thread safe.
For this little example
#include <iostream>
#include <thread>

void f()
{
    std::cout << "Hello from f\n";
}

void g()
{
    std::cout << "Hello from g\n";
}

int main()
{
    std::thread t1(f);
    std::thread t2(g);
    t1.join();
    t2.join();
}
my expectation was that the order of the two outputs would be undefined (and indeed that is what I observe in practice), but that the calls to operator<< are thread safe.
However, ThreadSanitizer, DRD and Helgrind all seem to give various errors regarding access to std::__1::ios_base::width(long) and std::__1::basic_ios<char, std::__1::char_traits<char> >::fill().
On Compiler Explorer I don't see any errors.
On FreeBSD 13, ThreadSanitizer gives me 3 warnings: the two listed above, plus a malloc/memcpy to the underlying I/O buffer.
Again on FreeBSD 13, DRD gives 4 errors: width() and fill(), times two for the two threads.
Finally, on FreeBSD 13, Helgrind gives one known false positive related to TLS in thread creation, plus fill() and width() twice.
On Fedora 34:
No errors with g++ 11.2.1 and ThreadSanitizer.
DRD complains about malloc/memcpy in fwrite with the g++-compiled exe.
Helgrind also complains about fwrite, and about the construction of cout, again with the g++-compiled exe.
With clang++ 12, ThreadSanitizer complains about fill() and width().
DRD with the clang++-compiled exe complains about fill(), width(), fwrite and one other in start_thread.
Helgrind with the clang++ exe complains about some TLS, fill(), width() and fwrite.
macOS Xcode clang++ ThreadSanitizer generates warnings as well (which will be libc++).
Looking at the libc++ and libstdc++ code I don't see anything at all that protects width(), so I don't understand why there are no complaints on Compiler Explorer.
I tried running with TSAN_OPTIONS=print_suppressions=1 and there was no more output (g++ Fedora ThreadSanitizer)
There does seem to be some consensus over the width() and fill() calls.
Looking more closely at the libstdc++ source, I see the following (with some trimming, and my comments added):
// ostream_insert.h
// __n is the length of the string pointed to by __s
template<typename _CharT, typename _Traits>
  basic_ostream<_CharT, _Traits>&
  __ostream_insert(basic_ostream<_CharT, _Traits>& __out,
                   const _CharT* __s, streamsize __n)
  {
    typedef basic_ostream<_CharT, _Traits>       __ostream_type;
    typedef typename __ostream_type::ios_base    __ios_base;

    typename __ostream_type::sentry __cerb(__out);
    if (__cerb)
      {
        __try
          {
            const streamsize __w = __out.width();
            if (__w > __n)
              {
                // snipped
                // handle padding
              }
            else
              __ostream_write(__out, __s, __n);
            // why no hazard here?
            __out.width(0);
          }
__out is the stream object, global cout in this case. I don't see anything like locks or atomics.
Any suggestions as to how ThreadSanitizer/g++ is getting a "clean" output?
There is this somewhat cryptic comment:
template<typename _CharT, typename _Traits>
  basic_ostream<_CharT, _Traits>::sentry::
  sentry(basic_ostream<_CharT, _Traits>& __os)
  : _M_ok(false), _M_os(__os)
  {
    // XXX MT
    if (__os.tie() && __os.good())
      __os.tie()->flush();
The libc++ code looks similar. In iostream:
template<class _CharT, class _Traits>
basic_ostream<_CharT, _Traits>&
__put_character_sequence(basic_ostream<_CharT, _Traits>& __os,
                         const _CharT* __str, size_t __len)
{
#ifndef _LIBCPP_NO_EXCEPTIONS
    try
    {
#endif // _LIBCPP_NO_EXCEPTIONS
        typename basic_ostream<_CharT, _Traits>::sentry __s(__os);
        if (__s)
        {
            typedef ostreambuf_iterator<_CharT, _Traits> _Ip;
            if (__pad_and_output(_Ip(__os),
                                 __str,
                                 (__os.flags() & ios_base::adjustfield) == ios_base::left ?
                                     __str + __len :
                                     __str,
                                 __str + __len,
                                 __os,
                                 __os.fill()).failed())
                __os.setstate(ios_base::badbit | ios_base::failbit);
and in locale:
template <class _CharT, class _OutputIterator>
_LIBCPP_HIDDEN
_OutputIterator
__pad_and_output(_OutputIterator __s,
                 const _CharT* __ob, const _CharT* __op, const _CharT* __oe,
                 ios_base& __iob, _CharT __fl)
{
    streamsize __sz = __oe - __ob;
    streamsize __ns = __iob.width();
    if (__ns > __sz)
        __ns -= __sz;
    else
        __ns = 0;
    for (; __ob < __op; ++__ob, ++__s)
        *__s = *__ob;
    for (; __ns; --__ns, ++__s)
        *__s = __fl;
    for (; __ob < __oe; ++__ob, ++__s)
        *__s = *__ob;
    __iob.width(0);
    return __s;
}
Again I see no thread protection, but this time the tools do detect a hazard.
Are these real issues? For plain calls to operator<< the value of width doesn't change, and is always 0.
libstdc++ does not produce the error while libc++ does.
[iostream.objects.overview] says: "Concurrent access to a synchronized ([ios.members.static]) standard iostream object's formatted and unformatted input ([istream]) and output ([ostream]) functions or a standard C stream by multiple threads shall not result in a data race ([intro.multithread])."
So this looks like a libc++ bug to me.
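As an aside: whichever library is at fault, if the practical goal is race-free, non-interleaved output, C++20's std::osyncstream sidesteps the question entirely by buffering each insertion and transferring it to cout atomically. A sketch of the example above rewritten that way:

#include <iostream>
#include <syncstream>
#include <thread>

// Each osyncstream accumulates output and hands it to cout in one atomic
// transfer when it is destroyed, so the two lines can never interleave.
void f() { std::osyncstream(std::cout) << "Hello from f\n"; }
void g() { std::osyncstream(std::cout) << "Hello from g\n"; }

int main()
{
    std::thread t1(f);
    std::thread t2(g);
    t1.join();
    t2.join();
}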
I got the answer from Jonathan Wakely. Makes me feel rather stupid.
The difference is that on Fedora, libstdc++.so contains an explicit instantiation of the iostream classes. libstdc++.so isn't instrumented for ThreadSanitizer so it cannot detect any hazards related to it.
public class Watcher: Object
{
    private int _fd;
    private uint _watch;
    private IOChannel _channel;
    private uint8[] _buffer;
    private int BUFFER_LENGTH;

    public Watcher(string path, Linux.InotifyMaskFlags mask){
        _buffer = new uint8[BUFFER_LENGTH];

        //➔ Initialize notify subsystem
        _fd = Linux.inotify_init();
        if(_fd < 0){
            error(@"Failed to initialize the notify subsystem: $(strerror(errno))");
        }

        //➔ actually adding abstraction to linux file descriptor
        _channel = new IOChannel.unix_new(_fd);

        //➔ watch the channel for given condition
        //➔ IOCondition.IN => when the channel is ready for reading, IOCondition.HUP => hangup (error)
        _watch = _channel.add_watch(IOCondition.IN | IOCondition.HUP, onNotified);
        if(_watch < 0){
            error(@"Failed to add watch to channel");
        }

        //➔ Tell the Linux kernel to watch for any mask (e.g. access, modify) on the given filepath
        var ok = Linux.inotify_add_watch(_fd, path, mask);
        if(ok < 0){
            error(@"Failed to add watch to path -- $path : $(strerror(errno))");
        }
        print(@"Watching for $(mask) on $path");
    }

    protected bool onNotified(IOChannel src, IOCondition condition)
    {
        if( (condition & IOCondition.HUP) == IOCondition.HUP){
            error(@"Received hang up from inotify, can't get update");
        }
        if( (condition & IOCondition.IN) == IOCondition.IN){
            var bytesRead = Posix.read(_fd, _buffer, BUFFER_LENGTH);
            Linux.InotifyEvent *pevent = (Linux.InotifyEvent*) _buffer;
            handleEvent(*pevent);
        }
        return true;
    }

    protected void handleEvent(Linux.InotifyEvent ev){
        print("Access Detected!\n");
        Posix.exit(0);
    }

    ~Watcher(){
        if(_watch != 0){
            Source.remove(_watch);
        }
        if(_fd != -1){
            Posix.close(_fd);
        }
    }
}

int main(string[] args) requires (args.length > 1)
{
    var watcher = new Watcher(args[1], Linux.InotifyMaskFlags.ACCESS);
    var loop = new MainLoop();
    loop.run();
    return 0;
}
The above code can be found in "Introducing Vala Programming" by Michael Lauer.
Proof of failure:
[Image showing a segmentation fault when the watched file is accessed]
Terminal 1:
./inotifyWatcher
Terminal 2:
cat
As soon as I access the file, segmentation fault occurs.
I have also tried using gdb to find the cause of the failure, but the output is mostly cryptic to me. I am using Parrot (Debian/64-bit) on my machine. Also, I am new to this (Stack Overflow, Linux kernel programming).
Vala source line numbers can be included in the binary when compiling with the --debug switch. The line numbers appear in the .debug_line DWARF section of an ELF binary:
valac --debug --pkg linux inotifyWatcher.vala
Run the binary using gdb in the first terminal:
gdb --args ./inotifyWatcher .
(gdb) run
The dot specifies watching the current directory. Then, when the current directory is accessed with a command like ls, the watching program segfaults. The output from GDB is:
Program received signal SIGSEGV, Segmentation fault.
0x0000000000401a86 in watcher_onNotified (self=0x412830, src=0x40e6e0, condition=G_IO_IN) at inotifyWatcher.vala:51
51 handleEvent(*pevent);
GDB includes the line number, 51, from the source file and shows the line.
So the problem is to do with reading from the file descriptor and then passing the buffer to handleEvent. You probably want to check that bytesRead is greater than zero, and I'm not sure about the use of pointers in this example. Explicit pointers like that should rarely be needed in Vala; it may require a change to the binding, e.g. using ref to modify the way the argument is passed.
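For comparison, this is roughly what a defensive read loop over an inotify descriptor looks like at the C level (a sketch in C++, not Vala, since the Vala binding wraps these same calls; the function name drain_events is made up for illustration):

#include <sys/inotify.h>
#include <unistd.h>
#include <cstdio>

// Read pending inotify events, checking the byte count before touching the
// buffer, and iterating because one read() can return several events.
void drain_events(int fd)
{
    // The buffer must hold at least one event plus its variable-length name.
    alignas(inotify_event) char buf[4096];
    ssize_t len = read(fd, buf, sizeof buf);
    if (len <= 0)
        return; // nothing read (or an error): do not touch the buffer

    for (char* p = buf; p < buf + len; ) {
        const inotify_event* ev = reinterpret_cast<const inotify_event*>(p);
        std::printf("event: wd=%d mask=0x%x\n", ev->wd, (unsigned)ev->mask);
        p += sizeof(inotify_event) + ev->len;
    }
}

Note, too, that BUFFER_LENGTH in the question's code is never assigned, so Posix.read is called with a zero-length buffer, which makes the bytesRead check all the more important.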
In an old question about how to catch Python stdout in C++ code, there is a good answer that works, but only in Python 2.
I would like to use something like that with Python 3. Could anyone help me here?
UPDATE
The code I am using is below. It was ported from Mark's answer cited above; the only change was the use of PyBytes_AsString instead of PyString_AsString, as described in the documentation.
#include <Python.h>
#include <string>

int main(int argc, char** argv)
{
    std::string stdOutErr =
"import sys\n\
class CatchOutErr:\n\
    def __init__(self):\n\
        self.value = ''\n\
    def write(self, txt):\n\
        self.value += txt\n\
catchOutErr = CatchOutErr()\n\
sys.stdout = catchOutErr\n\
sys.stderr = catchOutErr\n\
"; //this is python code to redirect stdouts/stderr

    Py_Initialize();
    PyObject *pModule = PyImport_AddModule("__main__"); //create main module
    PyRun_SimpleString(stdOutErr.c_str()); //invoke code to redirect
    PyRun_SimpleString("print(1+1)"); //this is ok stdout
    PyRun_SimpleString("1+a"); //this creates an error
    PyObject *catcher = PyObject_GetAttrString(pModule, "catchOutErr"); //get our catchOutErr created above
    PyErr_Print(); //make python print any errors
    PyObject *output = PyObject_GetAttrString(catcher, "value"); //get the stdout and stderr from our catchOutErr object
    printf("Here's the output:\n %s", PyBytes_AsString(output)); //it's not in our C++ portion
    Py_Finalize();
    return 0;
}
I build it against the Python 3 library:
g++ -I/usr/include/python3.6m -Wall -Werror -fpic code.cpp -lpython3.6m
and the output is:
Here's the output:
(null)
If someone needs more information about the question, please let me know and I will try to provide it here.
Your issue is that .value isn't a bytes object; it is a str (i.e. Python 2 unicode) object. Therefore PyBytes_AsString fails. We can convert it to a bytes object with PyUnicode_AsEncodedString.
PyObject *output = PyObject_GetAttrString(catcher,"value"); //get the stdout and stderr from our catchOutErr
PyObject* encoded = PyUnicode_AsEncodedString(output,"utf-8","strict");
printf("Here's the output:\n %s", PyBytes_AsString(encoded));
Note that you should be checking these result PyObject* against NULL to see if an error has occurred.
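For example, a checked version of that tail end might look like this (same names as above; the early returns are just one way to handle the errors):

PyObject *output = PyObject_GetAttrString(catcher, "value");
if (output == NULL) {       // attribute lookup failed
    PyErr_Print();
    return 1;
}
PyObject *encoded = PyUnicode_AsEncodedString(output, "utf-8", "strict");
if (encoded == NULL) {      // encoding failed
    Py_DECREF(output);
    PyErr_Print();
    return 1;
}
printf("Here's the output:\n %s", PyBytes_AsString(encoded));
Py_DECREF(encoded);
Py_DECREF(output);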
I'm using the Apache Portable Runtime to start a process via apr_procattr_create. My failing test case is when the called command does not exist on the system. On Windows, apr_proc_create returns a non-success error code if the executable does not exist. On Linux, I cannot work out how to detect the failure. According to the documentation, apr_procattr_error_check_set might be expected to do the trick, but it does not appear to.
Q: How can I detect that a process failed to start on linux with APR apr_proc_create?
Here's my code:
/**
 * Run a command asynchronously.
 * The command name is the first element of args. The remaining elements are
 * the arguments for the command.
 */
apr_status_t mynamespace::RunCommandUnchecked(const std::vector<std::string> & args)
{
    std::vector<const char*> cArgs;
    for (size_t i = 0; i < args.size(); ++i)
        cArgs.push_back(args[i].c_str());
    cArgs.push_back(nullptr);

    apr_procattr_t *procAttr;
    apr_procattr_create(&procAttr, this->impl->pool.get_Pool());

    // send the process's stdout to a temporary file
    apr_procattr_child_out_set(procAttr, this->impl->outputFile, nullptr);
    // block the process from accessing stdin & stderr on the current process
    apr_procattr_child_in_set(procAttr, nullptr, nullptr);
    apr_procattr_child_err_set(procAttr, nullptr, nullptr);
    // prefer to report errors to the caller
    apr_procattr_error_check_set(procAttr, 1);
    // ensure the path is searched for the command to run
    apr_procattr_cmdtype_set(procAttr, APR_PROGRAM_PATH);

    return apr_proc_create(&this->impl->proc, cArgs[0], cArgs.data(), nullptr, procAttr,
                           this->impl->pool.get_Pool());
}
My (failing on Linux) test case is as follows:
/*
 * In this test, we execute a command that does not exist. We expect
 * a non-success failure code.
 */
void CommandRunnerTests::CommandDoesNotExistUnchecked()
{
    mynamespace::CommandRunner runner(app::get_ApplicationLog());
    auto rv = runner.RunCommandUnchecked({ "pants-trousers-stockings.exe" });

    // We expect a non-success error code to be returned.
    // This assert fails on linux.
    CPPUNIT_ASSERT(rv != APR_SUCCESS);

#ifdef _WIN32
    std::string expected("The system cannot find the file specified.");
#else
    std::string expected("command not found");
#endif
    auto msg = app::GetAprErrorMessage(rv);
    CPPUNIT_ASSERT_STRING_EQUAL(expected, boost::trim_copy(msg));
}
When I execute the same command in the (bash) shell, the output is as follows:
me@pc:~/code$ pants-trousers-stockings.exe
pants-trousers-stockings.exe: command not found
me@pc:~/code$ echo $?
127
I'm currently using APR version 1.4.6. I can update to a newer version if there are any relevant changes, but I don't see any in the release notes.
The code works as expected on Windows.
My Linux OS is Ubuntu 14.04.
Calling apr_proc_wait doesn't work to detect the failure; it just tells me APR_PROC_EXIT (process terminated normally) and APR_CHILD_DONE (child is no longer running).
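For background: on Unix the exec failure happens in the child after fork(), so apr_proc_create() has usually already returned APR_SUCCESS in the parent, and the error has to be shipped back explicitly. That is what apr_procattr_error_check_set is meant to arrange. The classic plain-POSIX version of the trick (a sketch, not APR code; spawn_checked is a made-up name) is a close-on-exec pipe:

#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>

// Returns 0 if the exec started, otherwise the child's errno (e.g. ENOENT).
int spawn_checked(const char* file, char* const argv[])
{
    int fds[2];
    if (pipe(fds) != 0)
        return errno;
    fcntl(fds[1], F_SETFD, FD_CLOEXEC);  // write end vanishes on successful exec

    pid_t pid = fork();
    if (pid < 0) {
        close(fds[0]); close(fds[1]);
        return errno;
    }
    if (pid == 0) {                      // child
        close(fds[0]);
        execvp(file, argv);
        int err = errno;                 // exec failed: report errno to the parent
        write(fds[1], &err, sizeof err);
        _exit(127);
    }

    close(fds[1]);                       // parent
    int err = 0;
    ssize_t n = read(fds[0], &err, sizeof err);  // 0 bytes read => exec succeeded
    close(fds[0]);
    return n > 0 ? err : 0;
}

A follow-up check on the code returned through apr_proc_wait's exitcode argument may also help distinguish "ran and failed" from "never started" (the shell convention is 127, as seen above), though the exact code APR's child reports on exec failure is version-dependent.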
Both node console and Qt5's V8-based QJSEngine can be crashed by the following code:
a = []; for (;;) { a.push("hello"); }
node's output before crash:
FATAL ERROR: JS Allocation failed - process out of memory
QJSEngine's output before crash:
#
# Fatal error in JS
# Allocation failed - process out of memory
#
If I run my QJSEngine test app (see below) under a debugger, it shows a v8::internal::OS::DebugBreak call inside V8 code. If I wrap the code calling QJSEngine::evaluate in __try/__except (SEH), the app won't crash, but that solution is Windows-specific.
Question: Is there a way to handle v8::internal::OS::DebugBreak in a platform-independent way in node and Qt applications?
=== QJSEngine test code ===
Development environment: QtCreator with Qt5 and Windows SDK 7.1, on Windows XP SP3
QJSEngineTest.pro:
TEMPLATE = app
QT -= gui
QT += core qml
CONFIG -= app_bundle
CONFIG += console
SOURCES += main.cpp
TARGET = QJSEngineTest
main.cpp without SEH (this will crash):
#include <QtQml/QJSEngine>
int main(int, char**)
{
    try {
        QJSEngine engine;
        QJSValue value = engine.evaluate("a = []; for (;;) { a.push('hello'); }");
        qDebug(value.isError() ? "Error" : value.toString().toStdString().c_str());
    } catch (...) {
        qDebug("Exception");
    }
    return 0;
}
main.cpp with SEH (this won't crash, outputs "Fatal exception"):
#include <QtQml/QJSEngine>
#include <Windows.h>
void runTest()
{
    try {
        QJSEngine engine;
        QJSValue value = engine.evaluate("a = []; for (;;) { a.push('hello'); }");
        qDebug(value.isError() ? "Error" : value.toString().toStdString().c_str());
    } catch (...) {
        qDebug("Exception");
    }
}

int main(int, char**)
{
    __try {
        runTest();
    } __except(EXCEPTION_EXECUTE_HANDLER) {
        qDebug("Fatal exception");
    }
    return 0;
}
I don't believe there's a cross-platform way to trap V8 fatal errors, but even if there were, or if there were some way to trap them on all the platforms you care about, I'm not sure what that would buy you.
The problem is that V8 uses a global flag that records whether a fatal error has occurred. Once that flag is set, V8 will reject any attempt to create new JavaScript contexts, so there's no point in continuing anyway. Try executing some benign JavaScript code after catching the initial fatal error. If I'm right, you'll get another fatal error right away.
In my opinion the right thing would be for Node and Qt to configure V8 to not raise fatal errors in the first place. Now that V8 supports isolates and memory constraints, process-killing fatal errors are no longer appropriate. Unfortunately it looks like V8's error handling code does not yet fully support those newer features, and still operates with the assumption that out-of-memory conditions are always unrecoverable.
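If you control the embedding directly (which QJSEngine unfortunately does not expose), the V8 API of that era did offer a portable hook: V8::SetFatalErrorHandler replaces the default handler (the one that ends up in v8::internal::OS::DebugBreak) with your own. A sketch under that assumption; note that, per the above, there is no point trying to resume, so the handler should just log and terminate:

#include <v8.h>
#include <cstdio>
#include <cstdlib>

// Replacement for V8's default fatal error handler: log and exit cleanly
// instead of hitting DebugBreak/abort. V8's state is unrecoverable here.
static void onV8FatalError(const char* location, const char* message)
{
    std::fprintf(stderr, "V8 fatal error in %s: %s\n", location, message);
    std::exit(1);
}

void installFatalErrorHandler()
{
    // Call once at startup, before running any scripts.
    v8::V8::SetFatalErrorHandler(onV8FatalError);
}

For Qt's bundled V8 there is no public way to reach this handler, so the SEH workaround, or isolating scripts in a separate process, remains the practical option there.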