How to catch Python 3 stdout in C++ code

In an old question about how to catch Python stdout in C++ code, there is a good answer and it works, but only in Python 2.
I would like to use something like that with Python 3. Could anyone help me here?
UPDATE
The code I am using is below. It was ported from Mark's answer cited above; the only change was the use of PyBytes_AsString instead of PyString_AsString, as described in the documentation.
#include <Python.h>
#include <string>

int main(int argc, char** argv)
{
    std::string stdOutErr =
        "import sys\n\
class CatchOutErr:\n\
    def __init__(self):\n\
        self.value = ''\n\
    def write(self, txt):\n\
        self.value += txt\n\
catchOutErr = CatchOutErr()\n\
sys.stdout = catchOutErr\n\
sys.stderr = catchOutErr\n\
"; // this is Python code to redirect stdout/stderr

    Py_Initialize();
    PyObject *pModule = PyImport_AddModule("__main__"); // create main module
    PyRun_SimpleString(stdOutErr.c_str()); // invoke code to redirect
    PyRun_SimpleString("print(1+1)");      // this is ok stdout
    PyRun_SimpleString("1+a");             // this creates an error
    PyObject *catcher = PyObject_GetAttrString(pModule, "catchOutErr"); // get our catchOutErr created above
    PyErr_Print(); // make python print any errors
    PyObject *output = PyObject_GetAttrString(catcher, "value"); // get the stdout and stderr from our catchOutErr object
    printf("Here's the output:\n %s", PyBytes_AsString(output)); // it's not in our C++ portion
    Py_Finalize();
    return 0;
}
I build it against the Python 3 library:
g++ -I/usr/include/python3.6m -Wall -Werror -fpic code.cpp -lpython3.6m
and the output is:
Here's the output:
(null)
If someone needs more information about the question, please let me know and I will try to provide it here.

Your issue is that .value isn't a bytes object; it is a str (i.e. what Python 2 called unicode). Therefore PyBytes_AsString fails. We can convert it to a bytes object with PyUnicode_AsEncodedString.
PyObject *output = PyObject_GetAttrString(catcher, "value"); // get the stdout and stderr from our catchOutErr
PyObject *encoded = PyUnicode_AsEncodedString(output, "utf-8", "strict");
printf("Here's the output:\n %s", PyBytes_AsString(encoded));
Note that you should be checking each of these resulting PyObject* values against NULL to see whether an error has occurred.
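For illustration, a guarded version of that last part might look like this (a sketch; it also releases the new references returned by PyObject_GetAttrString and PyUnicode_AsEncodedString):

PyObject *catcher = PyObject_GetAttrString(pModule, "catchOutErr");
if (!catcher) { PyErr_Print(); return 1; }   // attribute lookup failed

PyObject *output = PyObject_GetAttrString(catcher, "value");
if (!output) { Py_DECREF(catcher); PyErr_Print(); return 1; }

PyObject *encoded = PyUnicode_AsEncodedString(output, "utf-8", "strict");
if (!encoded) { Py_DECREF(output); Py_DECREF(catcher); PyErr_Print(); return 1; }

printf("Here's the output:\n %s", PyBytes_AsString(encoded));

Py_DECREF(encoded);
Py_DECREF(output);
Py_DECREF(catcher);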

Related

Thread safety of std::cout insertion operator

I've always thought that using std::cout << something was thread safe.
For this little example
#include <iostream>
#include <thread>

void f()
{
    std::cout << "Hello from f\n";
}

void g()
{
    std::cout << "Hello from g\n";
}

int main()
{
    std::thread t1(f);
    std::thread t2(g);
    t1.join();
    t2.join();
}
my expectation was that the order of the two outputs would be unspecified (and indeed that is what I observe in practice), but that the calls to operator<< themselves would be thread safe.
However, ThreadSanitizer, DRD and Helgrind all seem to give various errors regarding access to std::__1::ios_base::width(long) and std::__1::basic_ios<char, std::__1::char_traits<char> >::fill().
On Compiler Explorer I don't see any errors.
On FreeBSD 13, ThreadSanitizer gives me 3 warnings: the two listed above, plus a malloc/memcpy to the underlying I/O buffer.
Again on FreeBSD 13, DRD gives 4 errors: width() and fill(), times two for the two threads.
Finally, on FreeBSD 13, Helgrind gives one known false positive related to TLS in thread creation, plus fill() and width() twice.
On Fedora 34:
- no errors with g++ 11.2.1 and ThreadSanitizer
- DRD complains about malloc/memcpy in fwrite with the g++-compiled exe
- Helgrind also complains about fwrite, and about the construction of cout, again with the g++-compiled exe
- with clang++ 12, ThreadSanitizer complains about fill() and width()
- DRD with the clang++-compiled exe complains about fill(), width(), fwrite and one other in start_thread
- Helgrind with the clang++ exe complains about some TLS, fill(), width() and fwrite
macOS Xcode clang++ ThreadSanitizer generates warnings as well (which will be libc++).
Looking at the libc++ and libstdc++ code I don't see anything at all that protects width(). So I don't understand why there are no complaints on Compiler Explorer.
I tried running with TSAN_OPTIONS=print_suppressions=1 and there was no more output (g++ Fedora ThreadSanitizer)
There does seem to be some consensus over the width() and fill() calls.
Looking more closely at the libstdc++ source, I see that there is (with some trimming and comments):

// ostream_insert.h
// __n is the length of the string pointed to by __s
template<typename _CharT, typename _Traits>
  basic_ostream<_CharT, _Traits>&
  __ostream_insert(basic_ostream<_CharT, _Traits>& __out,
                   const _CharT* __s, streamsize __n)
  {
    typedef basic_ostream<_CharT, _Traits> __ostream_type;
    typedef typename __ostream_type::ios_base __ios_base;

    typename __ostream_type::sentry __cerb(__out);
    if (__cerb)
      {
        __try
          {
            const streamsize __w = __out.width();
            if (__w > __n)
              {
                // snipped
                // handle padding
              }
            else
              __ostream_write(__out, __s, __n);
            // why no hazard here?
            __out.width(0);
          }
__out is the stream object, global cout in this case. I don't see anything like locks or atomics.
Any suggestions as to how ThreadSanitizer/g++ is getting a "clean" output?
There is this somewhat cryptic comment:

template<typename _CharT, typename _Traits>
  basic_ostream<_CharT, _Traits>::sentry::
  sentry(basic_ostream<_CharT, _Traits>& __os)
  : _M_ok(false), _M_os(__os)
  {
    // XXX MT
    if (__os.tie() && __os.good())
      __os.tie()->flush();
The libc++ code looks similar. In iostream:

template<class _CharT, class _Traits>
basic_ostream<_CharT, _Traits>&
__put_character_sequence(basic_ostream<_CharT, _Traits>& __os,
                         const _CharT* __str, size_t __len)
{
#ifndef _LIBCPP_NO_EXCEPTIONS
    try
    {
#endif // _LIBCPP_NO_EXCEPTIONS
        typename basic_ostream<_CharT, _Traits>::sentry __s(__os);
        if (__s)
        {
            typedef ostreambuf_iterator<_CharT, _Traits> _Ip;
            if (__pad_and_output(_Ip(__os),
                                 __str,
                                 (__os.flags() & ios_base::adjustfield) == ios_base::left ?
                                     __str + __len :
                                     __str,
                                 __str + __len,
                                 __os,
                                 __os.fill()).failed())
                __os.setstate(ios_base::badbit | ios_base::failbit);
and in locale:

template <class _CharT, class _OutputIterator>
_LIBCPP_HIDDEN
_OutputIterator
__pad_and_output(_OutputIterator __s,
                 const _CharT* __ob, const _CharT* __op, const _CharT* __oe,
                 ios_base& __iob, _CharT __fl)
{
    streamsize __sz = __oe - __ob;
    streamsize __ns = __iob.width();
    if (__ns > __sz)
        __ns -= __sz;
    else
        __ns = 0;
    for (; __ob < __op; ++__ob, ++__s)
        *__s = *__ob;
    for (; __ns; --__ns, ++__s)
        *__s = __fl;
    for (; __ob < __oe; ++__ob, ++__s)
        *__s = *__ob;
    __iob.width(0);
    return __s;
}
Again I see no thread protection, but this time the tools do detect a hazard.
Are these real issues? For plain calls to operator<< the value of width doesn't change, and is always 0.
libstdc++ does not produce the error while libc++ does.
From [iostream.objects.overview]: "Concurrent access to a synchronized ([ios.members.static]) standard iostream object's formatted and unformatted input ([istream]) and output ([ostream]) functions or a standard C stream by multiple threads does not result in a data race ([intro.multithread])."
So this looks like a libc++ bug to me.
I got the answer from Jonathan Wakely. Makes me feel rather stupid.
The difference is that on Fedora, libstdc++.so contains an explicit instantiation of the iostream classes. libstdc++.so isn't instrumented for ThreadSanitizer, so it cannot detect any hazards related to it.
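As a practical aside (my own addition, not part of the answer above): if you want interleaving-free output regardless of what the tools report, C++20's std::osyncstream gives each writer its own buffer that is transferred to the shared stream atomically. A minimal sketch of the original example rewritten with it:

#include <iostream>
#include <syncstream>
#include <thread>

void f()
{
    // accumulates locally, transfers to cout atomically when destroyed
    std::osyncstream(std::cout) << "Hello from f\n";
}

void g()
{
    std::osyncstream(std::cout) << "Hello from g\n";
}

int main()
{
    std::thread t1(f);
    std::thread t2(g);
    t1.join();
    t2.join();
}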

CEREAL failing to serialise - failed to read from input stream exception

I found a particular 100 MB .bin file (CarveObj_k5_rgbThreshold10_triangleCameraMatches.bin in the minimal example) where cereal fails to load, throwing the exception "Failed to read 368 bytes from input stream! Read 288".
The corresponding 900 MB XML file (CarveObj_k5_rgbThreshold10_triangleCameraMatches.xml in the minimal example), built from the same data, loads normally.
The XML file was produced by
// {
//     std::ofstream outFile(base + "_triangleCameraMatches.xml");
//     cereal::XMLOutputArchive oarchive(outFile);
//     oarchive(m_triangleCameraMatches);
// }
and the binary version was produced by
// {
//     std::ofstream outFile(base + "_triangleCameraMatches.bin");
//     cereal::BinaryOutputArchive oarchive(outFile);
//     oarchive(m_triangleCameraMatches);
// }
Minimal example: https://www.dropbox.com/sh/fu9e8km0mwbhxvu/AAAfrbqn_9Tnokj4BVXB8miea?dl=0
Version of Cereal used: 1.3.0
MSVS 2017
Windows 10
Is this a bug? Am I missing something obvious?
In the meantime, I've created a bug report: https://github.com/USCiLab/cereal/issues/607
In this particular instance, the "failed to read from input stream" exception thrown from line 105 of binary.hpp arises because the ios::binary flag is missing from the ifstream constructor call. (This is needed because, without it, ifstream will attempt to interpret some of the file contents as carriage-return and line-feed characters. See this question for more information.)
So the few lines of code in your minimal example that read from the .bin file should look like this:
vector<vector<float>> testInBinary;
{
    std::ifstream is("CarveObj_k5_rgbThreshold10_triangleCameraMatches.bin", ios::binary);
    cereal::BinaryInputArchive iarchive(is);
    iarchive(testInBinary);
}
However, even after this is fixed, there also seems to be another problem with the data in that particular .bin file: when I try to read it, I get a different exception thrown, seemingly arising from an incorrectly encoded size value. I don't know whether this is an artefact of copying to/from Dropbox, though.
There doesn't seem to be a fundamental 100 MB limit on cereal binary files. The following minimal example creates a binary file of around 256 MB and reads it back fine:
#include <iostream>
#include <fstream>
#include <vector>
#include <cereal/types/vector.hpp>
#include <cereal/types/memory.hpp>
#include <cereal/archives/xml.hpp>
#include <cereal/archives/binary.hpp>

using namespace std;

int main(int argc, char* argv[])
{
    vector<vector<double>> test;
    test.resize(32768, vector<double>(1024, -1.2345));

    {
        std::ofstream outFile("test.bin", ios::binary);
        cereal::BinaryOutputArchive oarchive(outFile);
        oarchive(test);
    }

    vector<vector<double>> testInBinary;
    {
        std::ifstream is("test.bin", ios::binary);
        cereal::BinaryInputArchive iarchive(is);
        iarchive(testInBinary);
    }

    return 0;
}
It might be worth noting that in your example code on Dropbox, you're also missing the ios::binary flag on the ofstream constructor when you're writing the .bin file:

/// Produced by:
// {
//     std::ofstream outFile(base + "_triangleCameraMatches.bin");
//     cereal::BinaryOutputArchive oarchive(outFile);
//     oarchive(m_triangleCameraMatches);
// }
It might be worth trying with the flag set. Hope some of this helps.
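For reference, the corrected write block would presumably look like this (same variables as in your code; the only change is the added flag):

{
    std::ofstream outFile(base + "_triangleCameraMatches.bin", std::ios::binary);
    cereal::BinaryOutputArchive oarchive(outFile);
    oarchive(m_triangleCameraMatches);
}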

Having trouble sending a message to ROS

I am pretty new to ROS. I am just trying to publish a message to a ROS node on a Linux server with this code:
#include "stdafx.h"
#include "ros.h"
#include <string>
#include <stdio.h>
#include <Windows.h>
using std::string;
int _tmain(int argc, _TCHAR * argv[])
{
ros::NodeHandle nh;
char *ros_master = "*.*.*.*";
printf("Connecting to server at %s\n", ros_master);
nh.initNode(ros_master);
printf("Advertising cmd_vel message\n");
string sent = "Hello robot";
ros::Publisher cmd_vel_pub("try", sent);
nh.advertise(cmd_vel_pub);
printf("All done!\n");
return 0;
}
The compiler gives me these errors:
Error C2664 'ros::Publisher::Publisher(ros::Publisher &&)': cannot convert argument 2 from 'std::string' to 'ros::Msg *' LeapMotion c:\users\vive-vr-pc\documents\visual studio 2015\projects\leapmotion\leapmotion\leapmotion.cpp 22
Error (active) no instance of constructor "ros::Publisher::Publisher" matches the argument list LeapMotion c:\Users\Vive-VR-PC\Documents\Visual Studio 2015\Projects\LeapMotion\LeapMotion\LeapMotion.cpp 22
I am on Visual Studio, and there aren't a lot of tutorials on going from Windows to Linux, so I am confused about what to do. Many thanks for the help! :D
Take a look at the Hello World example. You cannot send types that are not defined as messages; std::string is not a ROS message type. What you need is:
#include <std_msgs/String.h>
Define and fill the string message:
std_msgs::String sent;
ros::Publisher cmd_vel_pub("try", &sent);
nh.advertise(cmd_vel_pub);

ros::Rate r(1); // once a second
sent.data = "Hello robot";
while (nh.ok())
{
    cmd_vel_pub.publish(&sent); // publish takes a pointer to the message
    ros::spinOnce();
    r.sleep();
}
Check out this blabbler example and these tutorials.
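For completeness, here is a rough sketch of how the whole publisher might look with rosserial_windows (the master address is a hypothetical placeholder you'd replace with your server's IP, and I'm using a plain Sleep loop in the style of the rosserial_windows examples):

#include "stdafx.h"
#include "ros.h"
#include <std_msgs/String.h>
#include <Windows.h>

int _tmain(int argc, _TCHAR * argv[])
{
    ros::NodeHandle nh;
    char *ros_master = "192.168.0.10"; // hypothetical master address

    printf("Connecting to server at %s\n", ros_master);
    nh.initNode(ros_master);

    std_msgs::String sent;             // a real ROS message type, not std::string
    ros::Publisher cmd_vel_pub("try", &sent);
    nh.advertise(cmd_vel_pub);

    while (1)
    {
        sent.data = "Hello robot";
        cmd_vel_pub.publish(&sent);    // rosserial publish takes a pointer
        nh.spinOnce();                 // service the connection
        Sleep(1000);                   // roughly once a second
    }
    return 0;
}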

Running LLVM passes on Windows 10 gives no output in terminal?

I have the sample pass code from LLVM.org:
#include "llvm/Pass.h"
#include "llvm/IR/Function.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;
namespace {
struct Hello : public FunctionPass {
static char ID;
Hello() : FunctionPass(ID) {}
bool runOnFunction(Function &F) override {
errs() << "Hello: ";
errs().write_escaped(F.getName()) << '\n';
return false;
}
}; // end of struct Hello
} // end of anonymous namespace
char Hello::ID = 0;
static RegisterPass<Hello> X("hello", "Hello World Pass",
false /* Only looks at CFG */,
false /* Analysis Pass */);
The project builds fine and creates a SkeletonPass.dll.
When I execute the command:
C:\Users\nlykkei\Projects\llvm-pass-tutorial\build>opt -load skeleton\Debug\SkeletonPass.dll -hello foo.bc
opt: Unknown command line argument '-hello'. Try: 'opt -help'
opt: Did you mean '-help'?
opt doesn't recognize the -hello option, even though everything works fine on Ubuntu 16.04.
In addition, if I execute:
clang -Xclang -load -Xclang skeleton\Debug\SkeletonPass.dll foo.bc
nothing is printed to the Visual Studio terminal (Native Tools Command Prompt x86). On Linux, the function names are printed nicely for the same bitcode file.
What can be the reason for this? I do exactly the same thing on Windows 10 as on Ubuntu, but get very different results.
Plugins are special beasts on Windows, because Windows does not support proper dynamic linking against symbols in the host executable, so your pass simply does not register itself in the PassRegistry. You would either need to compile all of LLVM into a .dll, or link your pass statically into opt / clang.

Linux alternative to _NSGetExecutablePath?

Is it possible to side-step _NSGetExecutablePath on Ubuntu Linux in favor of a non-Apple-specific approach?
I am trying to compile the following code on Ubuntu: https://github.com/Bohdan-Khomtchouk/HeatmapGenerator/blob/master/HeatmapGenerator2_Macintosh_OSX.cxx
As per this prior question that I asked (fatal error: mach-o/dyld.h: No such file or directory), I decided to comment out line 52, and I am wondering whether there is a general, cross-platform (non-Apple-specific) way to rewrite the code block at line 567 (the _NSGetExecutablePath block).
Alen Stojanov's answer to Programmatically retrieving the absolute path of an OS X command-line app, and also How do you determine the full path of the currently running executable in go?, gave me some ideas on where to start, but I want to make certain that I am on the right track before I go about doing this.
Is there a way to replace _NSGetExecutablePath with something compatible with Ubuntu Linux?
Currently, I am experiencing the following compiler error:
HeatmapGenerator_Macintosh_OSX.cxx:568:13: error: use of undeclared identifier
'_NSGetExecutablePath'
if (_NSGetExecutablePath(path, &size) == 0)
The basic idea, done in a way that should be portable across POSIX systems:
#define _XOPEN_SOURCE 500
#include <stdio.h>
#include <limits.h>
#include <stdlib.h>

static char *path;

const char *appPath(void)
{
    return path;
}

static void cleanup()
{
    free(path);
}

int main(int argc, char **argv)
{
    path = realpath(argv[0], 0);
    if (!path)
    {
        perror("realpath");
        return 1;
    }
    atexit(&cleanup);

    printf("App path: %s\n", appPath());
    return 0;
}
You can define your own module for it: just pass it argv[0] and export the appPath() function from a header.
edit: replaced exported variable by accessor method
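As an aside, on Linux specifically you can avoid relying on argv[0] altogether (a parent process may set it to anything) by resolving the /proc/self/exe symlink; a minimal sketch:

#include <stdio.h>
#include <limits.h>
#include <unistd.h>

int main(void)
{
    char path[PATH_MAX];
    ssize_t len = readlink("/proc/self/exe", path, sizeof path - 1);
    if (len < 0)
    {
        perror("readlink");
        return 1;
    }
    path[len] = '\0'; /* readlink does not NUL-terminate */
    printf("App path: %s\n", path);
    return 0;
}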
