How to handle V8 engine crash when process runs out of memory - node.js

Both node console and Qt5's V8-based QJSEngine can be crashed by the following code:
a = []; for (;;) { a.push("hello"); }
node's output before crash:
FATAL ERROR: JS Allocation failed - process out of memory
QJSEngine's output before crash:
#
# Fatal error in JS
# Allocation failed - process out of memory
#
If I run my QJSEngine test app (see below) under a debugger, it breaks on a v8::internal::OS::DebugBreak call inside the V8 code. If I wrap the code calling QJSEngine::evaluate in a __try/__except (SEH) block, the app no longer crashes, but that solution is Windows-specific.
Question: Is there a way to handle v8::internal::OS::DebugBreak in a platform-independent way in node and Qt applications?
=== QJSEngine test code ===
Development environment: QtCreator with Qt5 and Windows SDK 7.1, on Windows XP SP3
QJSEngineTest.pro:
TEMPLATE = app
QT -= gui
QT += core qml
CONFIG -= app_bundle
CONFIG += console
SOURCES += main.cpp
TARGET = QJSEngineTest
main.cpp without SEH (this will crash):
#include <QtQml/QJSEngine>
int main(int, char**)
{
try {
QJSEngine engine;
QJSValue value = engine.evaluate("a = []; for (;;) { a.push('hello'); }");
qDebug(value.isError() ? "Error" : value.toString().toStdString().c_str());
} catch (...) {
qDebug("Exception");
}
return 0;
}
main.cpp with SEH (this won't crash, outputs "Fatal exception"):
#include <QtQml/QJSEngine>
#include <Windows.h>
void runTest()
{
try {
QJSEngine engine;
QJSValue value = engine.evaluate("a = []; for (;;) { a.push('hello'); }");
qDebug(value.isError() ? "Error" : value.toString().toStdString().c_str());
} catch (...) {
qDebug("Exception");
}
}
int main(int, char**)
{
__try {
runTest();
} __except(EXCEPTION_EXECUTE_HANDLER) {
qDebug("Fatal exception");
}
return 0;
}

I don't believe there's a cross-platform way to trap V8 fatal errors, but even if there were, or if there were some way to trap them on all the platforms you care about, I'm not sure what that would buy you.
The problem is that V8 uses a global flag that records whether a fatal error has occurred. Once that flag is set, V8 will reject any attempt to create new JavaScript contexts, so there's no point in continuing anyway. Try executing some benign JavaScript code after catching the initial fatal error. If I'm right, you'll get another fatal error right away.
In my opinion the right thing would be for Node and Qt to configure V8 to not raise fatal errors in the first place. Now that V8 supports isolates and memory constraints, process-killing fatal errors are no longer appropriate. Unfortunately it looks like V8's error handling code does not yet fully support those newer features, and still operates with the assumption that out-of-memory conditions are always unrecoverable.
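For what it's worth, the experiment suggested above can be grafted onto the Windows-only SEH harness from the question. This is only a sketch (the evaluateSnippet helper and the benign "1 + 1" script are illustrative, not part of the original test app):
#include <QtQml/QJSEngine>
#include <Windows.h>
// Evaluate a script in a fresh engine and print the result or "Error".
void evaluateSnippet(const char *script)
{
    QJSEngine engine;
    QJSValue value = engine.evaluate(script);
    qDebug(value.isError() ? "Error" : value.toString().toStdString().c_str());
}
int main(int, char**)
{
    __try {
        // Trigger the out-of-memory fatal error.
        evaluateSnippet("a = []; for (;;) { a.push('hello'); }");
    } __except(EXCEPTION_EXECUTE_HANDLER) {
        qDebug("Fatal exception");
    }
    __try {
        // If the fatal-error flag really is global, this benign script
        // should also die with a fatal error instead of printing "2".
        evaluateSnippet("1 + 1");
    } __except(EXCEPTION_EXECUTE_HANDLER) {
        qDebug("Fatal exception on benign code too");
    }
    return 0;
}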

Related

Can someone trace the reason for segmentation fault?

public class Watcher: Object
{
private int _fd;
private uint _watch;
private IOChannel _channel;
private uint8[] _buffer;
private int BUFFER_LENGTH;
public Watcher(string path, Linux.InotifyMaskFlags mask){
_buffer = new uint8[BUFFER_LENGTH];
//➔ Initialize notify subsystem
_fd = Linux.inotify_init();
if(_fd < 0){
error(#"Failed to initialize the notify subsystem: $(strerror(errno))");
}
//➔ actually adding abstraction to linux file descriptor
_channel = new IOChannel.unix_new(_fd);
//➔ watch the channel for given condition
//➔ IOCondition.IN => When the channel is ready for reading , IOCondition.HUP=>Hangup(Error)
_watch = _channel.add_watch(IOCondition.IN | IOCondition.HUP, onNotified);
if(_watch < 0){
error(#"Failed to add watch to channel");
}
//➔ Tell linux kernel to watch for any mask(for ex; access, modify) on a given filepath
var ok = Linux.inotify_add_watch(_fd, path, mask);
if(ok < 0){
error(#"Failed to add watch to path -- $path : $(strerror(errno))");
}
print(#"Watching for $(mask) on $path");
}
protected bool onNotified(IOChannel src, IOCondition condition)
{
if( (condition & IOCondition.HUP) == IOCondition.HUP){
error(#"Received hang up from inotify, can't get update");
}
if( (condition & IOCondition.IN) == IOCondition.IN){
var bytesRead = Posix.read(_fd, _buffer, BUFFER_LENGTH);
Linux.InotifyEvent *pevent = (Linux.InotifyEvent*) _buffer;
handleEvent(*pevent);
}
return true;
}
protected void handleEvent(Linux.InotifyEvent ev){
print("Access Detected!\n");
Posix.exit(0);
}
~Watcher(){
if(_watch != 0){
Source.remove(_watch);
}
if(_fd != -1){
Posix.close(_fd);
}
}
}
int main(string[] args) requires (args.length > 1)
{
var watcher = new Watcher(args[1], Linux.InotifyMaskFlags.ACCESS);
var loop = new MainLoop();
loop.run();
return 0;
}
The above code can be found in "Introducing Vala Programming" by Michael Lauer.
Proof of failure:
[Image: the watcher terminates with a segmentation fault as soon as the watched file is accessed.]
Terminal 1:
./inotifyWatcher
Terminal 2:
cat
As soon as I access the file, a segmentation fault occurs.
I have also tried using gdb to find the cause of the failure, but its output is mostly cryptic to me. I am using Parrot (Debian-based, 64-bit) on my machine. Also, I am new to this (Stack Overflow and Linux kernel programming).
Vala source line numbers can be included in the binary when compiling with the --debug switch. The line numbers appear in the .debug_line DWARF section of an ELF binary:
valac --debug --pkg linux inotifyWatcher.vala
Run the binary using gdb in the first terminal:
gdb --args ./inotifyWatcher .
(gdb) run
The dot specifies to watch the current directory. Then, when the current directory is accessed with a command like ls, the watching program segfaults. The output from GDB is:
Program received signal SIGSEGV, Segmentation fault.
0x0000000000401a86 in watcher_onNotified (self=0x412830, src=0x40e6e0, condition=G_IO_IN) at inotifyWatcher.vala:51
51 handleEvent(*pevent);
GDB includes the line number, 51, from the source file and shows the line.
So the problem has to do with reading from the file descriptor and then passing the buffer to handleEvent. You probably want to check that bytesRead is greater than zero, and I'm not sure about the use of pointers in this example. Explicit pointers like that should rarely be needed in Vala; avoiding them may require a change to the binding, e.g. using ref to modify the way the argument is passed.
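For comparison, a defensive read of inotify events against the same kernel API looks roughly like this in plain C/C++ (a sketch only; the buffer size, the watched path "." and the IN_ACCESS mask are illustrative):
#include <sys/inotify.h>
#include <unistd.h>
#include <limits.h>
#include <cstdio>
int main()
{
    int fd = inotify_init();
    if (fd < 0) { std::perror("inotify_init"); return 1; }
    if (inotify_add_watch(fd, ".", IN_ACCESS) < 0) { std::perror("inotify_add_watch"); return 1; }
    // Big enough for at least one event plus the longest possible name.
    alignas(inotify_event) char buffer[sizeof(inotify_event) + NAME_MAX + 1];
    ssize_t bytesRead = read(fd, buffer, sizeof(buffer));
    if (bytesRead <= 0) {               // nothing was read: never touch the buffer
        std::perror("read");
        close(fd);
        return 1;
    }
    // One read() can return several packed events; step through them by length.
    for (char *p = buffer; p < buffer + bytesRead; ) {
        inotify_event *event = (inotify_event *) p;
        std::printf("Access detected (mask 0x%x, len %u)\n", event->mask, event->len);
        p += sizeof(inotify_event) + event->len;
    }
    close(fd);
    return 0;
}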

How do I detect that a process failed to start, using apr_procattr_create from the Apache Portable Runtime on Ubuntu Linux?

I'm using the Apache Portable Runtime to start a process via apr_procattr_create. My failing test case is when the called command does not exist on the system. On Windows, apr_proc_create returns a non-success error code if the executable does not exist. On Linux, I cannot work out how to detect the failure. According to the documentation, apr_procattr_error_check_set might be expected to do the trick, but it does not appear to.
Q: How can I detect that a process failed to start on Linux with APR's apr_proc_create?
Here's my code:
/**
* Run a command asynchronously
* The command name is the first element of args. The remaining elements are
* the arguments for the command
*/
apr_status_t mynamespace::RunCommandUnchecked(const std::vector<std::string> & args)
{
std::vector<const char*> cArgs;
for (size_t i = 0; i < args.size(); ++i)
cArgs.push_back(args[i].c_str());
cArgs.push_back(nullptr);
apr_procattr_t *procAttr;
apr_procattr_create(&procAttr, this->impl->pool.get_Pool());
// send the process's std out to a temporary file
apr_procattr_child_out_set(procAttr, this->impl->outputFile, nullptr);
// block the process from accessing stdin & stderr on the current process
apr_procattr_child_in_set(procAttr, nullptr, nullptr);
apr_procattr_child_err_set(procAttr, nullptr, nullptr);
// prefer to report errors to the caller.
apr_procattr_error_check_set(procAttr, 1);
// Ensure the path is searched for the command to run
apr_procattr_cmdtype_set(procAttr, APR_PROGRAM_PATH);
return apr_proc_create(&this->impl->proc, cArgs[0], cArgs.data(), nullptr, procAttr, this->impl->pool.get_Pool());
}
My (failing on linux) test case is as follows:
/*
*
* In this test, we execute a command that does not exist. We expect
* a non-success failure code.
**/
void CommandRunnerTests::CommandDoesNotExistUnchecked()
{
mynamespace::CommandRunner runner(app::get_ApplicationLog());
auto rv = runner.RunCommandUnchecked({ "pants-trousers-stockings.exe" });
// We expect a non-success error code to be returned.
// This assert fails on linux.
CPPUNIT_ASSERT(rv != APR_SUCCESS);
#ifdef _WIN32
std::string expected("The system cannot find the file specified.");
#else
std::string expected("command not found");
#endif
auto msg = app::GetAprErrorMessage(rv);
CPPUNIT_ASSERT_STRING_EQUAL(expected, boost::trim_copy(msg));
}
When I execute the same command in the (bash) shell, the output is as follows:
me@pc:~/code$ pants-trousers-stockings.exe
pants-trousers-stockings.exe: command not found
me@pc:~/code$ echo $?
127
I'm currently using APR version 1.4.6. I can update to a newer version if there are any relevant changes, but I don't see any in the release notes.
The code works as expected on Windows.
My Linux OS is Ubuntu 14.04.
Calling apr_proc_wait doesn't detect the failure either; it just tells me APR_PROC_EXIT (process terminated normally) and APR_CHILD_DONE (child is no longer running).
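For illustration, the wait call behind that observation looks roughly like this (a sketch only; the variable names are mine and error handling is omitted):
// Sketch of the wait-based check described above.
int exitCode = -1;
apr_exit_why_e why;
apr_status_t waitStatus = apr_proc_wait(&this->impl->proc, &exitCode, &why, APR_WAIT);
// waitStatus comes back as APR_CHILD_DONE and why as APR_PROC_EXIT,
// so the failed exec is indistinguishable here from a normal termination.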

GoogleTest CMake doesn't recognize TEST_F: Like it's not recognizing GTest something

OK, I admit it, this is a unique case. When we build our application we use make, so I've included my tests in a test folder under src. Then, at the same level as our release folder, we have created a unit-test folder that includes all of our source files and our test source files.
But my IDE is CLion, which uses CMake. In my CMakeLists.txt file, I have included:
enable_testing()
find_package(GTest REQUIRED)
include_directories(${GTEST_INCLUDE_DIRS})
add_executable(TestProject ${SOURCE_FILES})
target_link_libraries(TestProject ${GTEST_BOTH_LIBRARIES})
I am creating my first Test Fixture. Here is the code:
#include "OPProperties.h"
#include "gtest/gtest.h"
namespace {
// The fixture for testing class OPPropertiesTestTest.
class OPPropertiesTestTest : public ::testing::Test {
protected:
// You can remove any or all of the following functions if its body
// is empty.
OPPropertiesTestTest() {
// You can do set-up work for each test here.
}
virtual ~OPPropertiesTestTest() {
// You can do clean-up work that doesn't throw exceptions here.
}
// If the constructor and destructor are not enough for setting up
// and cleaning up each test, you can define the following methods:
virtual void SetUp() {
// Code here will be called immediately after the constructor (right
// before each test).
}
virtual void TearDown() {
// Code here will be called immediately after each test (right
// before the destructor).
}
// Objects declared here can be used by all tests in the test case for OPPropertiesTestTest.
};
TEST_F(OPPropertiesTestTest, ThisTestWillFail) {
EXPECT_EQ(0, 2);
}
} // namespace
int main(int argc, char **argv) {
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
[Image: CLion editor showing syntax checker errors on the TEST_F function.]
Notice the syntax checker errors in my TEST_F function. When I started to type TEST_F, code completion was trying to find a Boost Test function.
Can someone tell me what else I need to add to the CMakeLists.txt file, or what I am not doing, so that the GTest functions are recognized?
As πάντα ῥεῖ pointed out, I hadn't actually tried to build the code. When I did, I first received a linker error for pthread, so we added the following line to the CMakeLists.txt file:
target_link_libraries(OnePrint pthread)
Then I tried again to build and received these errors:
/home/user/gtest-1.7.0/lib/.libs/libgtest.so: undefined reference to `pthread_key_create'
/home/user/gtest-1.7.0/lib/.libs/libgtest.so: undefined reference to `pthread_getspecific'
/home/user/gtest-1.7.0/lib/.libs/libgtest.so: undefined reference to `pthread_key_delete'
/home/user/gtest-1.7.0/lib/.libs/libgtest.so: undefined reference to `pthread_setspecific'
collect2: error: ld returned 1 exit status
So, I ran a search on these errors and found this question.
The answer that worked for me was here.

Memory errors after moving from Visual Studio 2010 to 2012

My library was working fine in Visual Studio 2010 but now whenever I run it compiled in 2012, I get these memory errors:
First-chance exception at 0x76884B32 in Example.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0101F3A0.
First-chance exception at 0x76884B32 in Example.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0101F3A0.
Unhandled exception at 0x76884B32 in Example.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0101F3A0.
First-chance exception at 0x76FF3541 (ntdll.dll) in Example.exe: 0xC0000005: Access violation reading location 0xFEFEFF02.
According to the call stack, the errors come up every time return wstring(cMD5.hexdigest()); is called from the following code:
wstring GetMachineHash() {
BYTE szHash[48];
LPBYTE pIdentifierHash;
ZeroMemory(szHash, 48);
MachineNameIdentifier cNameId;
pIdentifierHash = cNameId.GetIdentifierHash();
memcpy(szHash, pIdentifierHash, 16);
NetworkAdapterIdentifier cNetAdaptId;
pIdentifierHash = cNetAdaptId.GetIdentifierHash();
memcpy(szHash+16, pIdentifierHash, 16);
VolumeInfoIdentifier cVolInfo;
pIdentifierHash = cVolInfo.GetIdentifierHash();
memcpy(szHash+32, pIdentifierHash, 16);
MD5 cMD5(szHash, 48);
return wstring(cMD5.hexdigest());
}
If you're wondering which MD5 class I'm using, it's the one ported by Frank Thilo, but modified to return LPBYTE instead of std::string, like so:
// return 16 byte md5 hash
LPBYTE MD5::hash() const {
if (!finalized)
return 0;
return (LPBYTE)digest;
}
// return hex representation of digest as string
LPTSTR MD5::hexdigest() const
{
if (!finalized)
return NULL;
LPTSTR szBuf = new TCHAR[33];
ZeroMemory(szBuf, 33);
for (int i=0; i<16; i++)
_stprintf_s(szBuf+i*2, 33, _T("%02X"), digest[i]);
szBuf[32]=0;
return szBuf;
}
After doing some research here on SO, it seems these errors are caused by a possible memory leak somewhere else in my program. But I have looked, and everything seems to be freed correctly. Any ideas? I was also wondering whether this might have something to do with calling a DLL from an EXE through LoadLibrary() and GetProcAddress().
(I should also note that the Platform Toolset is set to Visual Studio 2012 - Windows XP (v110_xp))

OpenCV Multithreading error [using Qt]

Till now I have learnt one thing: there's something wrong in what I'm doing with OpenCV; Qt has no role in the error.
I'm trying to run two methods in different threads, but it gives me this error:
[xcb] Unknown request in queue while dequeuing
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
Blurring_Images: ../../src/xcb_io.c:178: dequeue_pending_request: Assertion `!xcb_xlib_unknown_req_in_deq' failed.
The program has unexpectedly finished.
Here's my code:
void Dialog::blurImages(int b)
{
QtConcurrent::run(this,&Dialog::homogenour_blur,b);
QtConcurrent::run(this,&Dialog::gaussianBlur,b);
}
void Dialog::homogenour_blur(int b)
{
cv::blur(img,img1,cv::Size(b,b));
showImage("Homogenous Blur",img1);
}
void Dialog::gaussianBlur(int b)
{
cv::GaussianBlur(img,img2,cv::Size(b,b),b);
showImage("Gaussian Blur",img2);
}
whereas if I comment out one call (shown below), it runs fine:
void Dialog::blurImages(int b)
{
QtConcurrent::run(this,&Dialog::homogenour_blur,b);
//QtConcurrent::run(this,&Dialog::gaussianBlur,b);
}
It's really annoying, guys, please help!
EDIT:
Instead of calling showImage(), I replaced it with the actual OpenCV call (see below):
void Dialog::homogenour_blur(int b)
{
cv::blur(img,img1,cv::Size(b,b));
//showImage("Homogenous Blur",img1);
cv::imshow("Homogenous Blur",img1);
}
void Dialog::gaussianBlur(int b)
{
cv::GaussianBlur(img,img2,cv::Size(b,b),b);
//showImage("Gaussian Blur",img2);
cv::imshow("Gaussian Blur",img2);
}
Now the error I get is:
Original Image: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Original Image: Fatal IO error 0 (Success) on X server :0.
Fatal Error: Accessed global static 'KGlobalSettings *s_self()' after destruction. Defined at ../../kdeui/kernel/kglobalsettings.cpp:190
The program has unexpectedly finished.
To all concerned:
fix Jaydeep's problems
[xcb] Unknown request in queue while dequeuing
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
and
error: ‘XInitThreads’ was not declared in this scope
by linking against X11, including Xlib, and calling XInitThreads().
Example for including xlib and calling XInitThreads:
// main.cpp
#include <thread>
#include <X11/Xlib.h>
int main() {
XInitThreads();
// . . .
}
Example for linking:
g++ main.cpp -o my_program -std=c++0x -pthread -lX11    # -pthread if you're on Linux
Of course, don't forget to link the other files that may be necessary for your application.
The imshow() method does not seem to be thread safe. If you use Python, it can be worked around like this (window_name and image are placeholder names):
self.img_lock = threading.Lock()
with self.img_lock:
    cv2.imshow(window_name, image)
    cv2.waitKey(1)
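A rough C++/Qt equivalent of the same idea (a sketch only; the mutex, the helper name and the headers are illustrative, and serializing the calls does not make imshow officially thread safe):
#include <QMutex>
#include <QMutexLocker>
#include <opencv2/highgui/highgui.hpp>
static QMutex imshowMutex;   // shared by every thread that wants to draw
static void showImageLocked(const char *title, const cv::Mat &image)
{
    QMutexLocker locker(&imshowMutex);  // released automatically when the function returns
    cv::imshow(title, image);
    cv::waitKey(1);
}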
