Is there a better way to write an autoconf test for a missing prototype than by setting CFLAGS to "-Werror -Wimplicit-function-declaration"?
Specifically, I'm trying to determine if I need to provide my own pwrite(2)
and pread(2). If the environment is strict, pread/pwrite are not defined.
Here's what I have now, which works:
AC_INIT([pwrite],[0.0.0],[none],[nothing],[nowhere])
AC_CONFIG_HEADERS([config.h])
old_CFLAGS=$CFLAGS
CFLAGS="-Werror $CFLAGS"
AC_COMPILE_IFELSE([AC_LANG_PROGRAM(,[
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
int main(int argc, char **argv) {
    int ret = pwrite(99, "blah", 1, 0);
    return 0;
} ]) ],
    AC_MSG_RESULT([using system pwrite prototype])
    AC_DEFINE(HAVE_PWRITE, 1, [pwrite prototype exists]),
    AC_MSG_RESULT([no pwrite prototype, using our own])
)
CFLAGS=$old_CFLAGS
AC_OUTPUT()
When I do this, running configure with CFLAGS=-std=c99 will indeed detect that pwrite is declared implicitly, and a plain configure run will find a pwrite prototype in unistd.h. However, mucking with CFLAGS inside configure doesn't seem like the "autoconf-y" way to do this.
If you look at the source of the autoconf macros, you'll find that a lot of them save and restore CFLAGS. You need to be very careful with -Werror, though, as you might get incorrect results: e.g., if argc and argv are unused - as is ret - the resulting warning (see the -Wunused* flags) becomes an error and gets interpreted as pwrite being unavailable.
Assuming <unistd.h> compiles without warnings-as-errors, which it should:
<save CFLAGS>
CFLAGS="$CFLAGS -Werror=implicit-function-declaration"
AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
[[#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif]],
[[(void) pwrite(99, "blah", 1, 0);]])],
<action-on-success>,
<action-on-fail>)
<restore CFLAGS>
The (void) cast is probably unnecessary - it's there for crazy-strict warning levels that probably won't be silent even for system headers, but it doesn't hurt. It might also be worth looking at the _XOPEN_SOURCE macro value - e.g., setting _XOPEN_SOURCE both in this test and in the library code.
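For instance, a minimal sketch of what that might look like (the exact _XOPEN_SOURCE value here is an assumption, not something from the original answer):

/* Sketch: request the X/Open feature level before any system header is
 * included, both in the configure test body and in the library sources.
 * 500 (SUSv2) is the first level that declares pread()/pwrite(). */
#define _XOPEN_SOURCE 500
#include <unistd.h>

int probe(void)
{
    /* With the feature level set, this compiles without an
     * implicit-declaration warning even under -std=c99. */
    return (int) pwrite(99, "blah", 1, 0);
}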
Saving/restoring CFLAGS is acceptable, but for this particular purpose AC_CHECK_DECLS turns out to be precisely what I was looking for. Furthermore, it has no problems with super-picky compilers, and it spares me from figuring out the Portland Group compiler's equivalent of -Werror=implicit-function-declaration.
AC_INIT([pwrite],[0.0.0],[none],[nothing],[nowhere])
AC_CONFIG_HEADERS([config.h])
AC_CHECK_HEADERS([unistd.h])
AC_CHECK_DECLS([pwrite])
AC_OUTPUT()
and then in my code I do have to check the result a little differently:
#if (HAVE_DECL_PWRITE == 0)
... implement our own pwrite
#endif
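For completeness, here is a minimal sketch of what the fallback could look like (the function name is hypothetical, and this lseek()+write() emulation is not atomic or thread-safe the way a real pwrite() is):

/* Hypothetical fallback sketch: emulate pwrite() with lseek() + write().
 * It moves and then restores the file offset, so unlike the real pwrite()
 * it is not safe if other threads use the same descriptor. */
#include "config.h"      /* for HAVE_DECL_PWRITE */
#include <sys/types.h>
#include <unistd.h>

#if (HAVE_DECL_PWRITE == 0)
static ssize_t fallback_pwrite(int fd, const void *buf, size_t count, off_t offset)
{
    off_t saved = lseek(fd, 0, SEEK_CUR);          /* remember current offset */
    if (saved == (off_t)-1)
        return -1;
    if (lseek(fd, offset, SEEK_SET) == (off_t)-1)  /* seek to requested offset */
        return -1;
    ssize_t written = write(fd, buf, count);       /* do the actual write */
    (void) lseek(fd, saved, SEEK_SET);             /* restore the old offset */
    return written;
}
#endif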
This is code in Linux (5.4.21).
When I use a virtual machine and attach gdb to the Linux process, I can set breakpoints and step through the code. For example, I set a breakpoint on the function arm_smmu_device_probe. When I step with the 'next' command, some values, for example 'smmu' or 'dev' below, are shown as having been optimized out. How can I keep them from being optimized out so that I can see them in gdb?
static int arm_smmu_device_probe(struct platform_device *pdev)
{
    int irq, ret;
    struct resource *res;
    resource_size_t ioaddr;
    struct arm_smmu_device *smmu;
    struct device *dev = &pdev->dev;
    bool bypass;

    smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
    if (!smmu) {
        dev_err(dev, "failed to allocate arm_smmu_device\n");
        return -ENOMEM;
    }
    smmu->dev = dev;

    if (dev->of_node) {
        ret = arm_smmu_device_dt_probe(pdev, smmu);
    } else {
        ret = arm_smmu_device_acpi_probe(pdev, smmu);
        if (ret == -ENODEV)
            return ret;
    }
I tried changing -O2 to -Og in the top Makefile, but the kernel build fails then.
Recently I found out how to do this (from Is there a way to tell GCC not to optimise a particular piece of code?, flolo's answer).
If you want a function aaa(...) not to be optimized, you can do it like this:
#pragma GCC push_options
#pragma GCC optimize ("O0")
void aaa( ... )
{
    /* function body */
}
#pragma GCC pop_options
In some cases, putting the #pragma there causes a discrepancy between the #include'd header file and the function source. In that case (not often) you need to add the #pragma around the corresponding #include statement as well. If linux/bbb.h causes this kind of problem, do this:
#pragma GCC push_options
#pragma GCC optimize ("O0")
#include <linux/bbb.h>
#pragma GCC pop_options
This works reliably and I'm enjoying(?) debugging and analysis this way.
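GCC also has a per-function attribute form of the same idea, which avoids the pragma pair; this is only a sketch and I have not verified it against this particular kernel function:

/* Per-function alternative to the push_options/pop_options pragmas:
 * ask GCC to compile just this one function at -O0. */
static int __attribute__((optimize("O0")))
arm_smmu_device_probe(struct platform_device *pdev)
{
    /* ... original function body unchanged ... */
}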
In this piece of code from the Wesnoth build, the $TESTFILE variable is substituted with the given path. But on Windows the path becomes invalid, because by default SCons subst() doesn't escape backslashes in paths. Is there a way to do this - to get the absolute filename for an SCons File node with escaped backslashes, or to escape the backslashes while substituting?
test_program = '''
#include <SDL_mixer.h>
#include <stdlib.h>
int main(int argc, char **argv)
{
    Mix_Music* music = Mix_LoadMUS("$TESTFILE");
    if (music == NULL) {
        exit(1);
    }
    exit(0);
}
\n
'''
print Environment(TESTFILE = File("data/core/music/main_menu.ogg").rfile().abspath). \
    subst(test_program)
The output:
#include <SDL_mixer.h>
#include <stdlib.h>
int main(int argc, char **argv)
{
    Mix_Music* music = Mix_LoadMUS("E:\wesnoth\scons\data\core\music\main_menu.ogg");
    if (music == NULL) {
        exit(1);
    }
    exit(0);
}
How about os.path.normpath, from the Python 2.7 docs?
os.path.normpath(path)
    Normalize a pathname by collapsing redundant separators and up-level references so that A//B, A/B/, A/./B and A/foo/../B all become A/B. This string manipulation may change the meaning of a path that contains symbolic links. On Windows, it converts forward slashes to backward slashes. To normalize case, use normcase().
After cloning the Wesnoth repo and inspecting the actual build files, I found that the problem you describe happens in the context of configuration. This wasn't clear from your original question, and it renders my first attempt at an answer useless (it used the Substfile builder; see the edit history).
As far as I know, there is currently no option built into SCons to handle the double-backslashing you're looking for. The cleanest way I can think of right now would be to clone the method SConf.SConfBase.TryRun (e.g. name it TryRunWithArgs), make it accept additional program arguments, add it to the configure context with AddTest(), and then rewrite the test program so that it accepts the filename as its first argument.
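For illustration, the rewritten test program could look roughly like this (a sketch: the hypothetical TryRunWithArgs clone named above would pass the music file's path on the command line, so nothing needs to be substituted or escaped):

#include <SDL_mixer.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    /* The test file path arrives as argv[1] instead of a substituted
       $TESTFILE, so Windows backslashes never go through subst(). */
    if (argc < 2) {
        exit(2);
    }
    Mix_Music* music = Mix_LoadMUS(argv[1]);
    if (music == NULL) {
        exit(1);
    }
    exit(0);
}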
I'm writing a shared library for a FreeBSD application.
This library gets loaded via LD_PRELOAD.
The application has multiple compiled versions, so some function offsets change between them and my library won't work there.
Now I want to read the offsets when the library is loaded.
Since the offsets change, I think my only option is to look up the offsets by specific function names.
The offsets are simply the addresses of functions or labels.
Now the problem - how do I do it?
Example
In the first version, I call the target's main like this:
int(*main)(int argc, char *argv[])=(int(*)(int,char*[]))0x081F3XXX;
but in the second, the offset has changed:
int(*main)(int argc, char *argv[])=(int(*)(int,char*[]))0x08233XXX;
Programmers (me included) are lazy and don't want to compile their libs for every version. I want to create one lib that works for every version!
I simply need the offsets of the functions by function name; the rest is no problem.
That's how I call the library:
LD_PRELOAD="/path/to/library.so" ./executable
or
env LD_PRELOAD="/path/to/library.so" ./executable
Edit with test code
Here is my test code, following up on the comments:
Main.cpp:
#include <stdio.h>

void test() {
    printf("Test done.\n");
}

int main(int argc, char * argv[]) {
    printf("Program started\n");
    test();
}
lib.cpp
#include <stdio.h>
#include <dlfcn.h>

void __attribute__ ((constructor)) my_load(void);

void my_load(void) {
    printf("Library loaded\n");
    printf("test - offset: 0x%x\n", dlsym(NULL, "test"));
}
test.sh
g++ main.cpp -o program
g++ -shared lib.cpp -o lib.so
env LD_PRELOAD="lib.so" ./program
-> Result:
Library loaded
test - offset: 0x0
Program started
Test done.
It does not seem to work :s
Edit 15:45
printf("test - offset: 0x%x\n",dlsym(dlopen("/home/test/test_proc/program",RTLD_GLOBAL),"test"));
This also does not work. Maybe dlsym is the wrong approach?
I reproduced your program on Mac OS X using Clang, and found a solution. First, the boring parts:
To make it compile cleanly I had to change your %x format specifier to %p for the pointer.
Then, on Mac OS X I had to pass RTLD_MAIN_ONLY as the first argument to dlsym(). I guess this is platform-dependent; on Linux it does seem to be NULL as you have.
Now, the meat of the fix!
You're searching with dlsym() for a symbol called test. But there is no such symbol in your application. Why? Because you're using C++, and C++ does "name mangling." You could use any number of tools to figure out the mangled name and try to load that with dlsym(), but it could change with different compilers. So instead, just inhibit name mangling by enclosing your test() function in extern "C":
extern "C" {
void test() {
printf("Test done.\n");
}
}
This fixed it for me:
$ DYLD_INSERT_LIBRARIES=lib.so ./program
Library loaded
test - offset: 0x1027d1eb0
Program started
Test done.
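For reference (not from the original answer), a consolidated lib.cpp with those changes could look like this on Linux/FreeBSD; note that the executable usually also has to be linked with -rdynamic so that its symbols are visible to dlsym():

#include <stdio.h>
#include <dlfcn.h>

// Runs automatically when the library is preloaded, before main().
extern "C" void __attribute__ ((constructor)) my_load(void);

extern "C" void my_load(void)
{
    printf("Library loaded\n");
    // %p for the pointer; "test" must be an unmangled (extern "C") symbol in
    // the executable, and exported (e.g. via -rdynamic), for this to succeed.
    printf("test - address: %p\n", dlsym(RTLD_DEFAULT, "test"));
}

Built roughly as g++ -rdynamic main.cpp -o program and g++ -shared -fPIC lib.cpp -o lib.so, then run with env LD_PRELOAD=./lib.so ./program.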
Why does the NTDDI_VERSION macro change its value between the cpp that includes it and ntdddisk.h?
I am using Visual Studio 2012 with cumulative update 4, and building on Windows 7 x64.
In one CPP I need to call a new IOCTL_ .. for WIN 8.
In that CPP there is an #include of ntdddisk.h.
ntdddisk.h defines the new IOCTL_ for WIN 8 under the guarded condition:
#if (NTDDI_VERSION >= NTDDI_WIN8)
...
#endif
Inside that cpp, the NTDDI_VERSION macro has the value NTDDI_WIN8 (the expected result of including sdkddkver.h and compiling with /D_WIN32_WINNT=0x0602).
However, inside ntdddisk.h the NTDDI_VERSION macro appears to have a value < NTDDI_VISTA, that is, less than NTDDI_WIN8.
Compilation fails with error
error C2065: 'IOCTL_..' : undeclared identifier
Looks like a bug, unless I'm missing something else. Thoughts?
Details are:
In the CPP file there are these includes
#pragma once
// Needed for new IOCTL_ for WIN 8
#include <sdkddkver.h>
#include <windows.h>
// Check NTDDI_VERSION ...
#if (NTDDI_VERSION >= NTDDI_WIN8)
// Value is NTDDI_WIN8 as expected
// #include <TROUBLE.h>
#endif
#pragma pack(8)
#include <ntdddisk.h>
#include <ntddscsi.h>
#include <lm.h>
#include <objbase.h>
/*=== IMPORTANT: this struct needs to have 8-byte packing ===*/
typedef struct _SCSI_PASS_THROUGH_WITH_BUFFERS {
    SCSI_PASS_THROUGH spt;
    ULONG Filler;        // realign buffers to double word boundary
    UCHAR SenseBuf[32];
    UCHAR DataBuf[512];
} SCSI_PASS_THROUGH_WITH_BUFFERS;
#pragma pack()
Compilation with CL uses these parameters, including -D_WIN32_WINNT=0x0602:
cl -nologo #COMPL.TMP /Fo..\\..\\..\\optimized\\obj\\x86\\CPP.obj CPP.cpp
COMPL.TMP contains
/I*** application-headers ***
-D_AFXDLL -c -D_CRT_SECURE_NO_DEPRECATE -D_CRT_NONSTDC_NO_DEPRECATE -DBTREEDB -O2 -Ox -MD -Zi -DNT_CLIENT -DWIN32 -D"_CONSOLE" -D_THREADS -D_OPSYS_TYPE=DS_WINNT -DPSAPI_VERSION=1 -D_WIN32_WINNT=0x0602 -TP -DMBCS=1 -D_LONG_LONG=1 -D_DSM_VLK_BTREE -DDSM_WIDECHAR -D_UNICODE -DUNICODE -DUSE_XML=1 -DXMLUTIL_EXPORTS=1 -DUSE_XERCES_2_8=1 -DPEGASUS_PLATFORM_WIN32_IX86_MSVC=1 -DPEGASUS_USE_EXPERIMENTAL_INTERFACES -Zp1 -D_DSM_LONG_NAME -W3 -EHsc -GF
The problem isn't with the _WIN32_WINNT or NTDDI_VERSION macros.
The problem is that windows.h indirectly includes winioctl.h which has the following curious couple of lines about halfway through:
#ifndef _NTDDDISK_H_
#define _NTDDDISK_H_
Unsurprisingly, ntdddisk.h starts with those very same lines and therefore is effectively not included at all.
I couldn't easily come up with a combination or ordering of headers that would work around this problem - I think it's something that MS really needs to fix.
However, the following terrible workaround (that I really don't suggest, unless you can't get any help from MS) seemed to get the compiler to actually process ntdddisk.h:
#define _NTDDDISK_H_
#include <windows.h>
#undef _NTDDDISK_H_
But, I suspect there may be other problems that might pop up as a result of this hack - so if you decide to use it, please test carefully.
I am not sure that this is exactly what I need, but the compilation worked after inserting
#define _NTDDDISK_H_
#include <windows.h>
...
#undef _NTDDDISK_H_
#include <ntdddisk.h>
Thanks for the suggestion.
Years ago, I used to do some basic programming in C. Now I am attempting to relearn what I have forgotten as well as learn Visual C++. I am confused though by all the string options and now the extra layer of trying to make my programs Unicode compatible. I have been reading Beginning Visual C++ 2010 as well as online reading to learn this information.
As an exercise I am writing a very basic program that asks a user to input some text and then display that text in the form of a messagebox. The program works, but my way of getting it to work was more through guesswork and looking at other examples than truly understanding why I need to convert the various strings into different types.
The code is:
#include "stdafx.h"
#include <iostream>
#include <string>
#include "Windows.h"
using std::wcin;
using std::wcout;
using std::wstring;
int _tmain(int argc, _TCHAR* argv[])
{
wstring myInput;
wcout << "Enter a string: ";
getline(wcin, myInput);
MessageBoxW(NULL, myInput.c_str(), _T("Test MessageBox"), 64);
return 0;
}
The MessageBox syntax is:
int WINAPI MessageBox(
    __in_opt HWND hWnd,
    __in_opt LPCTSTR lpText,
    __in_opt LPCTSTR lpCaption,
    __in UINT uType
);
On the other hand, if I just use the command line argument as the text of the messagebox, I do not need to convert the string at all and I am not sure why.
#include "stdafx.h"
#include <iostream>
#include <string>
#include "Windows.h"
using std::wcout;
int _tmain(int argc, _TCHAR* argv[])
{
MessageBoxW(NULL, argv[1], _T("Test MessageBox"), 64);
return 0;
}
My confusion is:
Why do I need to use the c_str() for argument 2 to MessageBoxW and why do I need to use the _T() macro (?) in argument 3?
Why did the program work in the second code example without doing some sort of conversion?
What exactly does LPCTSTR mean? I see another variant in MSDN functions called LPTSTR.
Thanks!
1) c_str() is the standard C++ method for getting a C string out of a C++ string. _tmain, _T('x'), _T("text") and _TCHAR are (somewhat ugly) Microsoft macros that make your program compile in either Unicode or non-Unicode mode. There's a global setting in the project options that sets some macros to configure your project for one of these two modes.
If you are in non-unicode mode (referred to as ANSI mode in MS's documentation) the macros expand to:
main, 'x', "text", char
If you are in unicode mode, the macros expand to
wmain, L'x', L"text", wchar_t
2) and 3) Windows headers are full of typedefs and macros like that. Sometimes they make code more obscure than it needs to be. In general, LP means pointer (long pointer, I guess, but it's been a while since we needed to distinguish between near and far pointers), C means "const", T means it will be either char or wchar_t depending on project settings, and STR is obviously "string". In the end it's a plain C type, which is why you can pass C strings to these functions without conversion.
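To make that expansion concrete, here is a small illustrative sketch (not from either answer) showing the same TCHAR-based call compiling in either mode:

#include <tchar.h>
#include <windows.h>

int _tmain(int argc, _TCHAR* argv[])
{
    // In a Unicode build, LPCTSTR is const wchar_t* and _T("...") becomes L"...";
    // in an ANSI (multi-byte) build it is const char* and a plain narrow literal.
    LPCTSTR caption = _T("Test MessageBox");

    // MessageBox is itself a macro that picks MessageBoxW or MessageBoxA to
    // match the build mode. MB_OK | MB_ICONINFORMATION equals the 64 used above.
    MessageBox(NULL,
               argc > 1 ? argv[1] : _T("no argument given"),
               caption,
               MB_OK | MB_ICONINFORMATION);
    return 0;
}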
The MessageBoxW function expects a C-style wide-character string (WCHAR*). The _T() macro (which becomes the L prefix in a Unicode build) makes your string literal Unicode compatible (WCHAR* instead of char*).
argv[] doesn't hold string objects, so you're already getting a WCHAR pointer out of it.
LPCTSTR is basically a WINAPI typedef for const char * or const WCHAR*, depending on whether you are building as UNICODE. Also see this post: LPCSTR, LPCTSTR and LPTSTR
In short, your main function is being passed WCHAR* strings and MessageBoxW expects WCHAR* strings.