How do I detect that a process failed to start, using apr_procattr_create from the Apache Portable Runtime on Ubuntu Linux? - linux

I'm using the Apache Portable Runtime to start a process via apr_procattr_create. My failing test case is when the called command does not exist on the system. On Windows, apr_proc_create returns a non-success error code if the executable does not exist. On Linux, I cannot work out how to detect the failure. According to the documentation, apr_procattr_error_check_set might be expected to do the trick, but it does not appear to.
Q: How can I detect that a process failed to start on Linux with APR's apr_proc_create?
Here's my code:
/**
 * Run a command asynchronously.
 * The command name is the first element of args. The remaining elements are
 * the arguments for the command.
 */
apr_status_t mynamespace::RunCommandUnchecked(const std::vector<std::string> & args)
{
    std::vector<const char*> cArgs;
    for (size_t i = 0; i < args.size(); ++i)
        cArgs.push_back(args[i].c_str());
    cArgs.push_back(nullptr);

    apr_procattr_t *procAttr;
    apr_procattr_create(&procAttr, this->impl->pool.get_Pool());
    // Send the process's stdout to a temporary file.
    apr_procattr_child_out_set(procAttr, this->impl->outputFile, nullptr);
    // Block the process from accessing stdin & stderr of the current process.
    apr_procattr_child_in_set(procAttr, nullptr, nullptr);
    apr_procattr_child_err_set(procAttr, nullptr, nullptr);
    // Prefer to report errors to the caller.
    apr_procattr_error_check_set(procAttr, 1);
    // Ensure the PATH is searched for the command to run.
    apr_procattr_cmdtype_set(procAttr, APR_PROGRAM_PATH);
    return apr_proc_create(&this->impl->proc, cArgs[0], cArgs.data(), nullptr, procAttr, this->impl->pool.get_Pool());
}
My test case (which fails on Linux) is as follows:
/*
 * In this test, we execute a command that does not exist. We expect
 * a non-success failure code.
 */
void CommandRunnerTests::CommandDoesNotExistUnchecked()
{
    mynamespace::CommandRunner runner(app::get_ApplicationLog());
    auto rv = runner.RunCommandUnchecked({ "pants-trousers-stockings.exe" });

    // We expect a non-success error code to be returned.
    // This assert fails on Linux.
    CPPUNIT_ASSERT(rv != APR_SUCCESS);

#ifdef _WIN32
    std::string expected("The system cannot find the file specified.");
#else
    std::string expected("command not found");
#endif
    auto msg = app::GetAprErrorMessage(rv);
    CPPUNIT_ASSERT_STRING_EQUAL(expected, boost::trim_copy(msg));
}
When I execute the same command in a (bash) shell, the output is as follows:
me@pc:~/code$ pants-trousers-stockings.exe
pants-trousers-stockings.exe: command not found
me@pc:~/code$ echo $?
127
I'm currently using APR version 1.4.6. I can update to a newer version if there are any relevant changes, but I don't see any in the release notes.
The code works as expected on Windows.
My Linux OS is Ubuntu 14.04.
Calling apr_proc_wait doesn't detect the failure either; it just tells me APR_PROC_EXIT (process terminated normally) and APR_CHILD_DONE (the child is no longer running).
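For reference, here is a sketch of the kind of wait call I mean (WaitForCommand is a hypothetical helper, not part of my real class). One thing I have not tried yet is keying off the exit code itself: I believe APR's Unix child calls _exit(-1) when the exec fails, which would show up here as 255, but I have not verified that against 1.4.6.
apr_status_t mynamespace::WaitForCommand(apr_proc_t & proc)
{
    int exitCode = 0;
    apr_exit_why_e exitWhy = APR_PROC_EXIT;
    // Block until the child has finished.
    apr_status_t rv = apr_proc_wait(&proc, &exitCode, &exitWhy, APR_WAIT);
    if (!APR_STATUS_IS_CHILD_DONE(rv))
        return rv;
    // exitWhy comes back as APR_PROC_EXIT even when the exec failed, so the
    // only remaining hint is the exit code. Treating 255 as "failed to start"
    // is an assumption (the value a child reports after _exit(-1)), not
    // documented APR behaviour.
    if (exitWhy == APR_PROC_EXIT && exitCode == 255)
        return APR_EGENERAL; // arbitrary "failed to start" mapping
    return APR_SUCCESS;
}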

Related

Can someone trace the reason for this segmentation fault?

public class Watcher : Object
{
    private int _fd;
    private uint _watch;
    private IOChannel _channel;
    private uint8[] _buffer;
    private int BUFFER_LENGTH;

    public Watcher(string path, Linux.InotifyMaskFlags mask) {
        _buffer = new uint8[BUFFER_LENGTH];

        //➔ Initialize the inotify subsystem
        _fd = Linux.inotify_init();
        if (_fd < 0) {
            error(@"Failed to initialize the notify subsystem: $(strerror(errno))");
        }

        //➔ Wrap the Linux file descriptor in an IOChannel abstraction
        _channel = new IOChannel.unix_new(_fd);

        //➔ Watch the channel for the given conditions
        //➔ IOCondition.IN => the channel is ready for reading, IOCondition.HUP => hangup (error)
        _watch = _channel.add_watch(IOCondition.IN | IOCondition.HUP, onNotified);
        if (_watch < 0) {
            error(@"Failed to add watch to channel");
        }

        //➔ Tell the Linux kernel to watch for the given mask (e.g. access, modify) on the given file path
        var ok = Linux.inotify_add_watch(_fd, path, mask);
        if (ok < 0) {
            error(@"Failed to add watch to path -- $path : $(strerror(errno))");
        }
        print(@"Watching for $(mask) on $path");
    }

    protected bool onNotified(IOChannel src, IOCondition condition)
    {
        if ((condition & IOCondition.HUP) == IOCondition.HUP) {
            error(@"Received hang up from inotify, can't get update");
        }
        if ((condition & IOCondition.IN) == IOCondition.IN) {
            var bytesRead = Posix.read(_fd, _buffer, BUFFER_LENGTH);
            Linux.InotifyEvent *pevent = (Linux.InotifyEvent*) _buffer;
            handleEvent(*pevent);
        }
        return true;
    }

    protected void handleEvent(Linux.InotifyEvent ev) {
        print("Access Detected!\n");
        Posix.exit(0);
    }

    ~Watcher() {
        if (_watch != 0) {
            Source.remove(_watch);
        }
        if (_fd != -1) {
            Posix.close(_fd);
        }
    }
}

int main(string[] args) requires (args.length > 1)
{
    var watcher = new Watcher(args[1], Linux.InotifyMaskFlags.ACCESS);
    var loop = new MainLoop();
    loop.run();
    return 0;
}
The above code is from "Introducing Vala Programming" by Michael Lauer.
Proof of failure:
[Image: terminal screenshot showing the segmentation fault when the watched file is accessed]
Terminal 1:
./inotifyWatcher
Terminal 2:
cat
As soon as I access the file, a segmentation fault occurs.
I have also tried using gdb to find the cause of the failure, but its output is mostly cryptic to me. I am using Parrot OS (Debian-based, 64-bit) on my machine. Also, I am new to this (Stack Overflow and Linux kernel programming).
Vala source line numbers can be included in the binary when compiling with the --debug switch. The line numbers appear in the .debug_line DWARF section of an ELF binary:
valac --debug --pkg linux inotifyWatcher.vala
Run the binary using gdb in the first terminal:
gdb --args ./inotifyWatcher .
(gdb) run
The dot specifies that the current directory should be watched. When the current directory is then accessed with a command like ls, the watching program segfaults. The output from GDB is:
Program received signal SIGSEGV, Segmentation fault.
0x0000000000401a86 in watcher_onNotified (self=0x412830, src=0x40e6e0, condition=G_IO_IN) at inotifyWatcher.vala:51
51 handleEvent(*pevent);
GDB includes the line number, 51, from the source file and shows the line.
So the problem is in reading from the file descriptor and then passing the buffer to handleEvent. Note that BUFFER_LENGTH is never assigned, so it defaults to zero and _buffer is a zero-length array; you should give the buffer a real size and check that bytesRead is greater than zero before touching it. I'm also not sure about the use of pointers in this example. Explicit pointers like that should rarely be needed in Vala; it may require a change to the binding, e.g. using ref to modify the way the argument is passed.
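For comparison, here is the usual read pattern against the raw inotify C API, written as a small C++ sketch rather than Vala (readEvents is a made-up name, and this is only meant to illustrate the shape of the fix: give the buffer a real size, check the byte count returned by read(), and walk the events by their reported length):
#include <sys/inotify.h>
#include <unistd.h>
#include <limits.h>
#include <cstddef>
#include <cstdio>

// Room for at least one event plus a file name.
static const size_t BUF_LEN = sizeof(struct inotify_event) + NAME_MAX + 1;

void readEvents(int fd)
{
    alignas(struct inotify_event) char buffer[BUF_LEN];
    ssize_t bytesRead = read(fd, buffer, sizeof(buffer));
    if (bytesRead <= 0) {
        std::perror("read");
        return;
    }
    // A single read() can return several events; step through them by length.
    ssize_t offset = 0;
    while (offset < bytesRead) {
        const struct inotify_event *event =
            reinterpret_cast<const struct inotify_event *>(buffer + offset);
        std::printf("event mask=0x%x wd=%d\n", (unsigned) event->mask, event->wd);
        offset += (ssize_t) (sizeof(struct inotify_event) + event->len);
    }
}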

How to completely erase the console output in Windows?

I'm trying to erase the Windows console programmatically from a Node.js script. I don't just want to slide the console output out of view; I want to actually clear it.
I'm writing a tool similar to TypeScript's tsc command: it watches a folder and incrementally compiles the project. On every file change, I rerun the compiler and output any errors that are found (one line each). I would like to totally erase the console output so that users are not confused by old error messages as they scroll up the console.
When you run tsc --watch in a directory, TypeScript does exactly what I want. tsc actually erases the entire console output.
I've tried all of the following things:
process.stdout.write("\x1Bc");
process.stdout.write('\033c')
var clear = require('cli-clear'); clear();
I tried all of the escape codes from this post.
process.stdout.write("\u001b[2J\u001b[0;0H");
All of these either:
Printed an unknown char to the console
Slid the console down, equivalent to cls, which is NOT what I want.
How do I actually clear the screen and remove ALL of the output? I'm open to using a node module, piping output, spawning new cmds, hacks, etc., as long as it gets the job done.
Here's a sample node.js script to test out the issue.
for (var i = 0; i < 15; i++) {
    console.log(i + ' --- ' + i);
}
// clear the console output here somehow
Adapted from a previous answer. You will need a C compiler (tested with MinGW/gcc).
#include <windows.h>

int main(void) {
    HANDLE hStdout;
    CONSOLE_SCREEN_BUFFER_INFO csbiInfo;
    COORD destinationPoint;
    SMALL_RECT sourceArea;
    CHAR_INFO Fill;

    // Get console handle
    hStdout = CreateFile("CONOUT$", GENERIC_READ | GENERIC_WRITE, FILE_SHARE_WRITE, 0, OPEN_EXISTING, 0, 0);

    // Retrieve console information
    if (GetConsoleScreenBufferInfo(hStdout, &csbiInfo)) {
        // Select the whole console buffer as the source
        sourceArea.Top = 0;
        sourceArea.Left = 0;
        sourceArea.Bottom = csbiInfo.dwSize.Y - 1;
        sourceArea.Right = csbiInfo.dwSize.X - 1;

        // Select a destination outside the console to move the buffer to
        destinationPoint.X = 0;
        destinationPoint.Y = 0 - csbiInfo.dwSize.Y;

        // Configure fill character and attributes
        Fill.Char.AsciiChar = ' ';
        Fill.Attributes = csbiInfo.wAttributes;

        // Move all the information out of the console buffer and reset the buffer
        ScrollConsoleScreenBuffer(hStdout, &sourceArea, NULL, destinationPoint, &Fill);

        // Position the cursor
        destinationPoint.X = 0;
        destinationPoint.Y = 0;
        SetConsoleCursorPosition(hStdout, destinationPoint);
    }
    return 0;
}
Compiled as clearConsole.exe (or whatever you want), it can be used from node like this:
const { spawn } = require('child_process');
spawn('clearConsole.exe');

How to handle V8 engine crash when process runs out of memory

Both node console and Qt5's V8-based QJSEngine can be crashed by the following code:
a = []; for (;;) { a.push("hello"); }
node's output before crash:
FATAL ERROR: JS Allocation failed - process out of memory
QJSEngine's output before crash:
#
# Fatal error in JS
# Allocation failed - process out of memory
#
If I run my QJSEngine test app (see below) under a debugger, it shows a v8::internal::OS::DebugBreak call inside V8 code. If I wrap the code calling QJSEngine::evaluate into __try-__except (SEH), then the app won't crash, but this solution is Windows-specific.
Question: Is there a way to handle v8::internal::OS::DebugBreak in a platform-independent way in node and Qt applications?
=== QJSEngine test code ===
Development environment: QtCreator with Qt5 and Windows SDK 7.1, on Windows XP SP3
QJSEngineTest.pro:
TEMPLATE = app
QT -= gui
QT += core qml
CONFIG -= app_bundle
CONFIG += console
SOURCES += main.cpp
TARGET = QJSEngineTest
main.cpp without SEH (this will crash):
#include <QtQml/QJSEngine>

int main(int, char**)
{
    try {
        QJSEngine engine;
        QJSValue value = engine.evaluate("a = []; for (;;) { a.push('hello'); }");
        qDebug(value.isError() ? "Error" : value.toString().toStdString().c_str());
    } catch (...) {
        qDebug("Exception");
    }
    return 0;
}
main.cpp with SEH (this won't crash, outputs "Fatal exception"):
#include <QtQml/QJSEngine>
#include <Windows.h>

void runTest()
{
    try {
        QJSEngine engine;
        QJSValue value = engine.evaluate("a = []; for (;;) { a.push('hello'); }");
        qDebug(value.isError() ? "Error" : value.toString().toStdString().c_str());
    } catch (...) {
        qDebug("Exception");
    }
}

int main(int, char**)
{
    __try {
        runTest();
    } __except(EXCEPTION_EXECUTE_HANDLER) {
        qDebug("Fatal exception");
    }
    return 0;
}
I don't believe there's a cross-platform way to trap V8 fatal errors, but even if there were, or if there were some way to trap them on all the platforms you care about, I'm not sure what that would buy you.
The problem is that V8 uses a global flag that records whether a fatal error has occurred. Once that flag is set, V8 will reject any attempt to create new JavaScript contexts, so there's no point in continuing anyway. Try executing some benign JavaScript code after catching the initial fatal error. If I'm right, you'll get another fatal error right away.
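A quick way to check that with the SEH test program from the question (a sketch; runBenignTest is a new helper, kept in its own function because MSVC will not mix __try with C++ objects that need unwinding):
// Call this from the __except branch in main(), after the first fatal
// error has been trapped.
void runBenignTest()
{
    QJSEngine engine;
    QJSValue value = engine.evaluate("1 + 1");
    // If V8 has latched its global fatal-error flag, the expectation is that
    // this hits the fatal error path again instead of printing "2".
    qDebug(value.isError() ? "Error" : value.toString().toStdString().c_str());
}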
In my opinion the right thing would be for Node and Qt to configure V8 to not raise fatal errors in the first place. Now that V8 supports isolates and memory constraints, process-killing fatal errors are no longer appropriate. Unfortunately it looks like V8's error handling code does not yet fully support those newer features, and still operates with the assumption that out-of-memory conditions are always unrecoverable.
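For embedders that talk to V8 directly (Node itself, or Qt's V8 layer), the configuration being alluded to looks roughly like the sketch below, based on the V8 3.x-era API. It is untested against these particular builds, and both the entry points and the units of the resource constraints have changed between V8 releases, so treat it as a pointer to the relevant APIs rather than a recipe:
#include <v8.h>
#include <cstdio>

// Replaces V8's default fatal error handler (which prints and aborts).
// Returning from here does not make the heap usable again after an
// out-of-memory fatal error; it only gives the embedder a chance to log
// and shut down cleanly instead of crashing outright.
static void OnV8FatalError(const char *location, const char *message)
{
    std::fprintf(stderr, "V8 fatal error at %s: %s\n", location, message);
}

void configureV8()
{
    // Cap the JS heap so a runaway script fails sooner and more predictably.
    // Must be done before the first context is created; check v8.h for your
    // build, since the setter names and units (bytes vs. MB) have varied.
    v8::ResourceConstraints constraints;
    constraints.set_max_old_space_size(64 * 1024 * 1024); // intended as 64 MB in bytes
    v8::SetResourceConstraints(&constraints);

    // Route fatal errors through our own handler instead of the default.
    v8::V8::SetFatalErrorHandler(OnV8FatalError);
}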

How to launch editor like vi or emacs from node.js REPL?

I want to launch an editor like vi or emacs from node.js REPL.
I have tried two approaches till now:
Node Addons
Here's what my editor.cc looks like:
const char *tempFile = "TEMP_FILE"; // File to be opened with the editor

Handle<Value> launchEditor(const Arguments& args) {
    const char *editor = "vi";
    Local<String> buffer;
    pid_t pid = fork();
    if (pid == 0) {
        execlp(editor, editor, tempFile, NULL);
        // Exit with "command-not-found" if the exec above fails.
        exit(127);
    } else {
        waitpid(pid, 0, 0);
        char *fileContent = readTempFile(); // Simple file IO code to read the file.
        buffer = String::New(fileContent);
        free(fileContent);
    }
    return buffer;
}

// MAKE IT A NODE MODULE
void Init(Handle<Object> target) {
    target->Set(String::NewSymbol("editor"), FunctionTemplate::New(launchEditor)->GetFunction());
}

NODE_MODULE(editor, Init)
This worked when I had node v0.6.12 (compiled with node-waf), but when I updated node to v0.8.1 (compiled with node-gyp), this code stopped working. The editor simply didn't appear and the file content was read and returned (with emacs), or the editor ended up running as a background process (with vi)! Is there anything I need to change for it to work with 0.8.1?
Even if the editor is launched as a background process, can I bring it to the foreground from the code itself?
The child_process module
spawn = require('child_process').spawn;
editor = spawn('emacs', ['TEMP_FILE']);
But this is not working properly. With emacs, it shows the error "input is not a tty", and vi gives a garbled interface.
Can someone help with any of the solutions above, or suggest some other working solution?
I stumbled upon a module for exactly this about a week ago; you should give it a try: node-editor

Where do writes to stdout go when launched from a cygwin shell with no redirection?

I have an application, let's call it myapp.exe, which is dual-mode console/GUI, built as /SUBSYSTEM:WINDOWS. (There's a tiny 3 KB shim, myapp.com, to make cmd.exe wait before displaying the new prompt.)
If I launch from a command prompt:
myapp -> cmd.exe runs myapp.com which runs myapp.exe. stdout is initially a detached console, by using AttachConsole and freopen("CONOUT$", "w", stdout) my output appears in the command box. OK
myapp.exe -> cmd.exe displays the prompt too early (known problem), otherwise same as previous. Not a normal usage scenario.
myapp > log -> stdout is a file, normal use of std::cout ends up in the file. OK
If I launch from Windows explorer:
myapp.com -> console is created, stdout is console, output goes into console. Same result as using /SUBSYSTEM:CONSOLE for the entire program, except that I've added a pause when myapp.com is the only process in the console. Not a normal usage scenario.
myapp.exe -> stdout is a NULL handle, I detect this and hook std::cout to a GUI. OK
If I launch from Matlab shell:
system('myapp') or system('myapp.com') or system('myapp.exe') -> For all three variations, stdout is piped to MatLab. OK
If I launch from a cygwin bash shell:
./myapp.com -> Just like launch from cmd.exe, the output appears in the command box. OK
./myapp -> (bash finds ./myapp.exe). This is the broken case. stdout is a non-NULL handle but output goes nowhere. This is the normal situation for running the program from bash and needs to be fixed!
./myapp > log -> Just like launch from cmd.exe with file redirection. OK
./myapp | cat -> Similar to file redirection, except output ends up on the console window. OK
Does anybody know what cygwin sets as stdout when launching a /SUBSYSTEM:WINDOWS process and how I can bind std::cout to it? Or at least tell me how to find out what kind of handle I'm getting back from GetStdHandle(STD_OUTPUT_HANDLE)?
My program is written with Visual C++ 2010, without /clr, in case that matters in any way. OS is Windows 7 64-bit.
EDIT: Additional information requested.
CYGWIN environment variable is empty (or non-existent).
GetFileType() returns FILE_TYPE_UNKNOWN. GetLastError() returns 6 (ERROR_INVALID_HANDLE). It doesn't matter whether I check before or after calling AttachConsole().
However, if I simply ignore the invalid handle and freopen("CONOUT$", "w", stdout) then everything works great. I was just missing a way to distinguish between (busted) console output and file redirection, and GetFileType() provided that.
EDIT: Final code:
bool is_console(HANDLE h)
{
    if (!h) return false;
    ::AttachConsole(ATTACH_PARENT_PROCESS);
    if (FILE_TYPE_UNKNOWN == ::GetFileType(h) && ERROR_INVALID_HANDLE == GetLastError()) {
        /* workaround cygwin brokenness */
        h = ::CreateFile(_T("CONOUT$"), GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, 0, NULL);
        if (h) {
            ::CloseHandle(h);
            return true;
        }
    }
    CONSOLE_FONT_INFO cfi;
    return ::GetCurrentConsoleFont(h, FALSE, &cfi) != 0;
}

bool init( void )
{
    HANDLE out = ::GetStdHandle(STD_OUTPUT_HANDLE);
    if (out) {
        /* stdout exists, might be console, file, or pipe */
        if (is_console(out)) {
#pragma warning(push)
#pragma warning(disable: 4996)
            freopen("CONOUT$", "w", stdout);
#pragma warning(pop)
        }
        //std::stringstream msg;
        //DWORD result = ::GetFileType(out);
        //DWORD lasterror = ::GetLastError();
        //msg << result << std::ends;
        //::MessageBoxA(NULL, msg.str().c_str(), "GetFileType", MB_OK);
        //if (result == FILE_TYPE_UNKNOWN) {
        //    msg.str(std::string());
        //    msg << lasterror << std::ends;
        //    ::MessageBoxA(NULL, msg.str().c_str(), "GetLastError", MB_OK);
        //}
        return true;
    }
    else {
        /* no text-mode stdout, launch GUI (actual code removed) */
    }
}
The GetFileType() function lets you distinguish between several kinds of handles, in particular consoles, pipes, files, and broken handles.
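To make that concrete, here is a small sketch of how GetFileType() can classify a standard handle (DescribeHandle is a made-up helper; the constants and the GetLastError() check are standard Win32):
#include <windows.h>
#include <stdio.h>

// Rough classification of an output handle: console, pipe, file, or broken.
static const char *DescribeHandle(HANDLE h)
{
    switch (GetFileType(h)) {
    case FILE_TYPE_CHAR: return "character device (e.g. a real console)";
    case FILE_TYPE_PIPE: return "pipe (e.g. myapp | cat)";
    case FILE_TYPE_DISK: return "disk file (e.g. myapp > log)";
    case FILE_TYPE_UNKNOWN:
        // GetFileType() also reports FILE_TYPE_UNKNOWN on failure, so check
        // GetLastError() to tell a genuinely unknown type from a bad handle.
        return (GetLastError() == ERROR_INVALID_HANDLE)
            ? "broken handle (the cygwin case above)"
            : "unknown handle type";
    default:
        return "unexpected value";
    }
}

int main(void)
{
    printf("stdout is: %s\n", DescribeHandle(GetStdHandle(STD_OUTPUT_HANDLE)));
    return 0;
}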
