How do I log errors while writing a custom Varnish module?

I am learning Varnish and how to extend it with VMODs and inline C code, and I am starting by writing my own custom Varnish module. I want to log errors and failures from my module. How do I achieve that?
I have various C logging libraries to choose from, but I want to check first whether there is a built-in Varnish facility I can use instead. Below is the sample code of my VMOD C file.
#include "vrt.h"
#include "cache/cache.h"
#include "vcc_if.h"
#include <jansson.h>
#define JSON_ERROR "-1"
#define JSON_LOC "/etc/example.json"
VCL_STRING
vmod_validate_mymod(VRT_CTX) {
(void) ctx;
char *return_code = "0";
json_t *jobj;
json_error_t error;
jobj = json_load_file(JSON_LOC,0,&error);
if (!jobj) {
// error log here
return JSON_ERROR;
}
return return_code;
}
I want an error log line to be added to a custom log file when the if condition in the code above is true. Please help.

You want VSLb:
VSLb(ctx->vsl, SLT_VCL_Log, "%d", 5);
If you need to build a larger string, or need allocations, use the WS_* functions; their allocations are freed automatically at the end of the request.
See how std.log() does it: https://github.com/varnishcache/varnish-cache/blob/389d7ba28e0d0e3a2d5c30a959aa517e5166b246/vmod/vmod_std.c#L145-L153
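Applied to the VMOD above, the error branch could log the jansson error details through VSLb. A sketch along these lines (the tag and message wording are just an example; the entry goes to the Varnish shared-memory log, not to a file):
    if (!jobj) {
        /* Shows up in varnishlog as a VCL_Log record for this transaction */
        VSLb(ctx->vsl, SLT_VCL_Log,
            "mymod: failed to load %s: %s (line %d)",
            JSON_LOC, error.text, error.line);
        return JSON_ERROR;
    }
You can then pull those records out with varnishlog and redirect them to whatever custom log file you want, rather than opening a file from inside the VMOD.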

Related

CEREAL failing to serialise - failed to read from input stream exception

I found a particular 100 MB .bin file (CarveObj_k5_rgbThreshold10_triangleCameraMatches.bin in the minimal example) where cereal fails to load, throwing the exception "Failed to read 368 bytes from input stream! Read 288".
The corresponding 900 MB XML file (CarveObj_k5_rgbThreshold10_triangleCameraMatches.xml in the minimal example), built from the same data, loads normally.
The XML file was produced by
// {
//     std::ofstream outFile(base + "_triangleCameraMatches.xml");
//     cereal::XMLOutputArchive oarchive(outFile);
//     oarchive(m_triangleCameraMatches);
// }
and the binary version was produced by
// {
//     std::ofstream outFile(base + "_triangleCameraMatches.bin");
//     cereal::BinaryOutputArchive oarchive(outFile);
//     oarchive(m_triangleCameraMatches);
// }
Minimal example: https://www.dropbox.com/sh/fu9e8km0mwbhxvu/AAAfrbqn_9Tnokj4BVXB8miea?dl=0
Version of Cereal used: 1.3.0
MSVS 2017
Windows 10
Is this a bug? Am I missing something obvious?
Created a bug report in the meanwhile: https://github.com/USCiLab/cereal/issues/607
In this particular instance, the "failed to read from input stream" exception thrown from line 105 of binary.hpp arises because the ios::binary flag is missing from the ifstream constructor call. (This is needed; otherwise ifstream will attempt to interpret some of the file contents as carriage-return and line-feed characters. See this question for more information.)
So the few lines of code in your minimal example that read from the .bin file should look like this:
vector<vector<float>> testInBinary;
{
    std::ifstream is("CarveObj_k5_rgbThreshold10_triangleCameraMatches.bin", ios::binary);
    cereal::BinaryInputArchive iarchive(is);
    iarchive(testInBinary);
}
However, even after this is fixed, there also seems to be another problem with the data in that particular .bin file: when I try to read it I get a different exception, seemingly arising from an incorrectly encoded size value. I don't know whether this is an artefact of copying to/from Dropbox, though.
There doesn't seem to be a fundamental 100MB limit on Cereal binary files. The following minimal example creates a binary file of around 256MB and reads it back fine:
#include <iostream>
#include <fstream>
#include <vector>
#include <cereal/types/vector.hpp>
#include <cereal/types/memory.hpp>
#include <cereal/archives/xml.hpp>
#include <cereal/archives/binary.hpp>

using namespace std;

int main(int argc, char* argv[])
{
    vector<vector<double>> test;
    test.resize(32768, vector<double>(1024, -1.2345));

    {
        std::ofstream outFile("test.bin", ios::binary);
        cereal::BinaryOutputArchive oarchive(outFile);
        oarchive(test);
    }

    vector<vector<double>> testInBinary;
    {
        std::ifstream is("test.bin", ios::binary);
        cereal::BinaryInputArchive iarchive(is);
        iarchive(testInBinary);
    }

    return 0;
}
It might be worth noting that in your example code on Dropbox, you're also missing the ios::binary flag on the ofstream constructor when you're writing the .bin file:
// Produced by:
// {
//     std::ofstream outFile(base + "_triangleCameraMatches.bin");
//     cereal::BinaryOutputArchive oarchive(outFile);
//     oarchive(m_triangleCameraMatches);
// }
It might be worth trying with the flag set. Hope some of this helps.
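For reference, the corrected write would look like the commented-out snippet above with ios::binary added to the ofstream constructor:
// {
//     std::ofstream outFile(base + "_triangleCameraMatches.bin", ios::binary);
//     cereal::BinaryOutputArchive oarchive(outFile);
//     oarchive(m_triangleCameraMatches);
// }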

Linux alternative to _NSGetExecutablePath?

Is it possible to side-step _NSGetExecutablePath on Ubuntu Linux in favour of a non-Apple-specific approach?
I am trying to compile the following code on Ubuntu: https://github.com/Bohdan-Khomtchouk/HeatmapGenerator/blob/master/HeatmapGenerator2_Macintosh_OSX.cxx
As per a prior question that I asked (fatal error: mach-o/dyld.h: No such file or directory), I decided to comment out line 52, and I am wondering whether there is a general, cross-platform way to rewrite the code block at line 567 (the _NSGetExecutablePath block) in a manner that is not Apple-specific.
Alen Stojanov's answer to Programmatically retrieving the absolute path of an OS X command-line app and also How do you determine the full path of the currently running executable in go? gave me some ideas on where to start, but I want to make certain that I am on the right track before I go about doing this.
Is there a way to modify _NSGetExecutablePath to be compatible with Ubuntu Linux?
Currently, I am experiencing the following compiler error:
HeatmapGenerator_Macintosh_OSX.cxx:568:13: error: use of undeclared identifier
'_NSGetExecutablePath'
if (_NSGetExecutablePath(path, &size) == 0)
The basic idea, done in a way that should be portable across POSIX systems:
#define _XOPEN_SOURCE 500
#include <stdio.h>
#include <limits.h>
#include <stdlib.h>

static char *path;

const char *appPath(void)
{
    return path;
}

static void cleanup(void)
{
    free(path);
}

int main(int argc, char **argv)
{
    path = realpath(argv[0], 0);
    if (!path)
    {
        perror("realpath");
        return 1;
    }
    atexit(&cleanup);

    printf("App path: %s\n", appPath());
    return 0;
}
You can define your own module for it: just pass it argv[0] and export the appPath() function from a header.
edit: replaced exported variable by accessor method
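A minimal sketch of such a module, reusing the realpath()/atexit() logic from the example above (file and function names here are illustrative, not from the original answer):
/* apppath.h */
#ifndef APPPATH_H
#define APPPATH_H
int appPathInit(const char *argv0);  /* call once from main(); returns 0 on success */
const char *appPath(void);           /* absolute path of the executable, valid until exit */
#endif

/* apppath.c */
#define _XOPEN_SOURCE 500
#include <stdlib.h>
#include "apppath.h"

static char *path;

static void cleanup(void)
{
    free(path);
}

int appPathInit(const char *argv0)
{
    path = realpath(argv0, 0);
    if (!path)
        return -1;
    atexit(&cleanup);
    return 0;
}

const char *appPath(void)
{
    return path;
}
In main() you would call appPathInit(argv[0]) once and then use appPath() everywhere else.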

ELF weak import / fallback stubs for glibc functions

I am trying to make our program runnable on some old Linux versions. One common import that prevents it is __longjmp_chk, added in glibc 2.11 but missing in older versions. One "solution" is to use -D_FORTIFY_SOURCE=0, but this turns off other fortified functions (__printf_chk etc.) which are present in the target libc. Is there a way to make __longjmp_chk a "weak import" which would use the function from libc.so.6 if present, and fall back to a local stub if not?
Is there a way to make __longjmp_chk a "weak import" which would use the function from libc.so.6 if present, and fall back to local stub if not?
I'd say yes, using dlsym() to check for __longjmp_chk and acting accordingly:
/* cc -ldl */
#define _GNU_SOURCE
#include <setjmp.h>
#include <stdio.h>
#include <dlfcn.h>

void __longjmp_chk(sigjmp_buf env, int val)
{
    void (*p)(sigjmp_buf, int) = dlsym(RTLD_NEXT, "__longjmp_chk");
    if (p)
    {
        printf("use the function from libc\n");
        p(env, val);
    }
    else
    {
        printf("falling back to local stub\n");
        /* local stub - whatever that may be */
    }
}

int main(void)
{   // try it
    sigjmp_buf env;
    while (!setjmp(env)) __longjmp_chk(env, 1);
    return 0;
}
I am trying to make our program runnable on some old Linux versions.
There are only a few ways to make this work, and most of them are enumerated here.
Is there a way to make __longjmp_chk a "weak import".
No.

How to handle V8 engine crash when process runs out of memory

Both node console and Qt5's V8-based QJSEngine can be crashed by the following code:
a = []; for (;;) { a.push("hello"); }
node's output before crash:
FATAL ERROR: JS Allocation failed - process out of memory
QJSEngine's output before crash:
#
# Fatal error in JS
# Allocation failed - process out of memory
#
If I run my QJSEngine test app (see below) under a debugger, it shows a v8::internal::OS::DebugBreak call inside V8 code. If I wrap the code calling QJSEngine::evaluate into __try-__except (SEH), then the app won't crash, but this solution is Windows-specific.
Question: Is there a way to handle v8::internal::OS::DebugBreak in a platform-independent way in node and Qt applications?
=== QJSEngine test code ===
Development environment: QtCreator with Qt5 and Windows SDK 7.1, on Windows XP SP3
QJSEngineTest.pro:
TEMPLATE = app
QT -= gui
QT += core qml
CONFIG -= app_bundle
CONFIG += console
SOURCES += main.cpp
TARGET = QJSEngineTest
main.cpp without SEH (this will crash):
#include <QtQml/QJSEngine>

int main(int, char**)
{
    try {
        QJSEngine engine;
        QJSValue value = engine.evaluate("a = []; for (;;) { a.push('hello'); }");
        qDebug(value.isError() ? "Error" : value.toString().toStdString().c_str());
    } catch (...) {
        qDebug("Exception");
    }
    return 0;
}
main.cpp with SEH (this won't crash, outputs "Fatal exception"):
#include <QtQml/QJSEngine>
#include <Windows.h>

void runTest()
{
    try {
        QJSEngine engine;
        QJSValue value = engine.evaluate("a = []; for (;;) { a.push('hello'); }");
        qDebug(value.isError() ? "Error" : value.toString().toStdString().c_str());
    } catch (...) {
        qDebug("Exception");
    }
}

int main(int, char**)
{
    __try {
        runTest();
    } __except(EXCEPTION_EXECUTE_HANDLER) {
        qDebug("Fatal exception");
    }
    return 0;
}
I don't believe there's a cross-platform way to trap V8 fatal errors, but even if there were, or if there were some way to trap them on all the platforms you care about, I'm not sure what that would buy you.
The problem is that V8 uses a global flag that records whether a fatal error has occurred. Once that flag is set, V8 will reject any attempt to create new JavaScript contexts, so there's no point in continuing anyway. Try executing some benign JavaScript code after catching the initial fatal error. If I'm right, you'll get another fatal error right away.
In my opinion the right thing would be for Node and Qt to configure V8 to not raise fatal errors in the first place. Now that V8 supports isolates and memory constraints, process-killing fatal errors are no longer appropriate. Unfortunately it looks like V8's error handling code does not yet fully support those newer features, and still operates with the assumption that out-of-memory conditions are always unrecoverable.
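For context, the hook in question in the raw V8 embedding API of that era was a process-global fatal-error callback, roughly as sketched below. Neither node nor QJSEngine exposes it directly, and, as argued above, trapping the error does not make the VM usable again; treat this as an illustration only.
#include <v8.h>
#include <cstdio>
#include <cstdlib>

// Illustrative handler: invoked by V8 when it reports a fatal error
// (such as the out-of-memory message above).
static void OnV8FatalError(const char* location, const char* message)
{
    std::fprintf(stderr, "V8 fatal error at %s: %s\n", location, message);
    // The VM is effectively dead at this point; flush state and exit cleanly.
    std::exit(1);
}

int main()
{
    v8::V8::SetFatalErrorHandler(OnV8FatalError);
    // ... create an isolate/context and run scripts as usual ...
    return 0;
}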

Error : Expected a declaration

I am writing code in Visual C++ to access a serial port.
The code is given below:
#include <stdio.h>
#include <cstring>
#include <string.h>
#include <conio.h>
#include <iostream>
using namespace std;
//#include "stdafx.h"

#ifndef __CAPSTONE_CROSS_SERIAL_PORT__
#define __CAPSTONE_CROSS_SERIAL_PORT__

HANDLE hSerial = CreateFile(L"COM1", GENERIC_READ | GENERIC_WRITE, 0, 0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);

if (hSerial == INVALID_HANDLE_VALUE)
{
    if (GetLastError() == ERROR_FILE_NOT_FOUND) {
        //serial port does not exist. Inform user.
    }
    //some other error occurred. Inform user.
}
In the above code I am getting an error at the if in the line
if (hSerial == INVALID_HANDLE_VALUE)
The error is:
Error: expected a declaration
I get the same error at both closing braces } at the end of the if statement.
I want to know why I am getting this error and how to resolve it.
I think you may want to read this. The problem is that you are trying to use an if statement at namespace scope (the global namespace), where only declarations are valid.
You will need to wrap your logic in a function of some kind.
void mySuperCoolFunction()
{
    if (hSerial == INVALID_HANDLE_VALUE)
    {
        if (GetLastError() == ERROR_FILE_NOT_FOUND)
        {
            //serial port does not exist. Inform user.
        }
        //some other error occurred. Inform user.
    }
}
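For example, a complete sketch along those lines might look like this (assuming a Unicode build, as in your snippet, so that CreateFile resolves to CreateFileW; the error reporting here is only illustrative):
#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE hSerial = CreateFile(L"COM1", GENERIC_READ | GENERIC_WRITE,
                                0, 0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
    if (hSerial == INVALID_HANDLE_VALUE)
    {
        if (GetLastError() == ERROR_FILE_NOT_FOUND)
            std::printf("Serial port COM1 does not exist.\n");
        else
            std::printf("Error %lu while opening COM1.\n", GetLastError());
        return 1;
    }

    // ... read from / write to the port here ...

    CloseHandle(hSerial);
    return 0;
}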
