Conditional compilation in Haskell other than using CPP - haskell

The CPP extension allows conditional compilation, e.g.
{-# LANGUAGE CPP #-}
#ifdef DEBUG
-- some debug code
#endif
It works fine, of course, but it's quite clumsy and non-idiomatic. Is there really no other mechanism to achieve conditional compilation?
(The specific case where I really would like to use it is the Text.Megaparsec.Debug.dbg function. The parse trail it produces is really useful, but the source code gets littered with #ifdef...#endif noise which makes it all rather unreadable. A wrapper function at the top would remove most of the noise, but I'm wondering nonetheless.)

A lightweight solution is to only use CPP once to define a boolean which can then be used in regular Haskell code:
#ifdef DEBUG
#define debug True
#else
#define debug False
#endif
or use a macro instead of a boolean if you don't even want the debug code to go through type checking.
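A minimal sketch of that idea: CPP is used exactly once to define a flag, and ordinary Haskell code branches on it. Compile with -DDEBUG to turn tracing on; debugLog is a hypothetical helper, not a library function:

```haskell
{-# LANGUAGE CPP #-}

-- CPP appears only here; the rest is plain Haskell.
#ifdef DEBUG
#define DEBUG_ENABLED True
#else
#define DEBUG_ENABLED False
#endif

import Debug.Trace (trace)

-- Emit a trace message in debug builds, do nothing otherwise.
debugLog :: String -> a -> a
debugLog msg x = if DEBUG_ENABLED then trace msg x else x

main :: IO ()
main = print (debugLog "evaluating sum" (sum [1 .. 10 :: Int]))
```

For the Megaparsec case from the question, the same pattern gives a one-line wrapper such as `dbg' lbl = if DEBUG_ENABLED then dbg lbl else id`, so only a single #ifdef lives in the code base.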
Another way to do conditional compilation without CPP is to swap the source of modules at the package level, though I don't know of any real example of this.
Create two modules with the same name debug/Debug.hs and nodebug/Debug.hs, both exporting, for example, a boolean debug :: Bool.
In the package configuration, add a flag to select between debug/ and nodebug/.
flag debug
  description: debug mode
  default: False
  manual: True

library
  ...
  if flag(debug)
    hs-source-dirs: debug
  else
    hs-source-dirs: nodebug
Now you can build the library with -f +debug to enable debugging.
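For completeness, here is what such a module pair might look like. The module name Debug and the export debug :: Bool come from the description above; the rest is illustrative:

```haskell
-- debug/Debug.hs, picked up when the package is built with the flag on:
module Debug (debug) where

debug :: Bool
debug = True

-- nodebug/Debug.hs would be identical except that it defines
-- debug = False
```

Since the selection happens in the .cabal file, neither module needs CPP, and both go through type checking normally.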

Related

Haddock breaks down on #if #else #endif clauses

I am trying to generate documentation for a GitHub library using Haddock. Here's the command I entered:
$ find -name '*.hs' | xargs haddock --html -o docs
src/Reflex/Dom/Xhr.hs:154:0:
error: missing binary operator before token "("
#if MIN_VERSION_aeson(1,0,0)
^
Then I looked up the relevant section of my source code Xhr.hs line 154:
import Data.Aeson
#if MIN_VERSION_aeson(1,0,0)
import Data.Aeson.Text
#else
import Data.Aeson.Encode
#endif
I didn't know #if, #else and #endif were part of Haskell, but I could guess the meaning: depending on the aeson version, the code imports either Data.Aeson.Text or Data.Aeson.Encode. Just in case, I looked up the version:
$ ghc-pkg list | grep aeson
aeson-0.11.3.0
This was enough to give Haddock difficulty. The pages get sent to a folder called docs, which contains a few empty HTML files waiting to be populated with the details of the Reflex.Dom library.
That code uses the C preprocessor (-cpp). Preprocessor directives are not part of the usual Haskell language, so in order to parse that code correctly you need to pass additional options to Haddock:
2.1. Using literate or pre-processed source
Since Haddock uses GHC internally, both plain and literate Haskell sources are accepted without the need for the user to do anything. To use the C pre-processor, however, the user must pass the -cpp option to GHC using --optghc. [emphasis mine]
There are two caveats, though. The MIN_VERSION_aeson macro expansion you've posted will only work with GHC 8.0 (or later), and only if the package is exposed. It should, however, work with cabal haddock or stack haddock regardless of the GHC version; those variants are recommended anyway when building the documentation of a cabal/stack package.
If you know what you're doing and want to invoke Haddock directly, use haddock --optghc=-cpp.

Different server and client dependencies with haste

I'm building a small haste project where I want to use Elasticsearch. However, bloodhound, which seems like the go-to library for Elasticsearch in Haskell, depends indirectly on template-haskell, which isn't supported by haste. Now, I don't need to call Elasticsearch from the client, so I don't need bloodhound in haste, but I need to be able to call it from within the same code base, as haste is built to use the same code for the server and client side. I guess I could somehow have separate client- and server-side implementations, but I really like the haste way.
How can I have calls to dependencies that only exist on the server side in haste?
The preprocessor can be used for this purpose. Haste defines the __HASTE__ macro, so it should be enough to wrap your code in a conditional:
{-# LANGUAGE CPP #-}
main = do
#ifdef __HASTE__
  print "haste!"
#endif
#ifndef __HASTE__
  print "not haste!"
#endif
  print "everybody"
Don't forget to enable the C preprocessor with the {-# LANGUAGE CPP #-} pragma.
You can also achieve a similar effect in your .cabal file:
Build-Depends:
  bytestring >= 0.9.2.1

if flag(haste-inst)
  Build-Depends:
    base == 4.6.0.1,
    array == 0.4.0.1
else
  Build-Depends:
    base,
    array,
    random,
    websockets >= 0.8
(source https://github.com/valderman/haste-compiler/blob/0.4/libraries/haste-lib/haste-lib.cabal#L63)
Note that the haste-inst flag has been renamed to haste-cabal in the latest development version of Haste.
A potential solution I've thought about is to import a "Shared" module with two different implementations, one client/Shared.hs and one server/Shared.hs, and then include one of the implementations using the -i option: -iclient for haste and -iserver for GHC.
I can't test this at the moment though so I'll have to get back to it.
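As a sketch of that idea (untested, as noted above; the Shared module name and the -i directories come from the text, while logEvent is a made-up interface just for illustration):

```haskell
-- server/Shared.hs, selected by compiling with -iserver under GHC.
-- client/Shared.hs would export the same names, but with implementations
-- that are safe to compile with haste (e.g. no bloodhound dependency).
module Shared (logEvent) where

-- On the server this could forward to Elasticsearch via bloodhound;
-- here it just prints, to keep the sketch self-contained.
logEvent :: String -> IO ()
logEvent msg = putStrLn ("server: " ++ msg)
```

The key point is that both files expose the same interface, so the rest of the code base imports Shared without caring which directory supplied it.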

where to define DEBUG symbol for Debug build in VS2012?

I have a Win32 DLL application. The Debug build is selected. I wrote this code:
#if DEBUG
fflush(logFile);
#endif
But fflush(logFile); is grayed out, so I assume it will not be executed.
But I want it to be executed. Does this mean that the DEBUG symbol is not defined in the Debug configuration? Where can I define it in VS2012?
Preprocessor definitions are set under the project settings, as shown on the screenshot (note _DEBUG there):
Note that in the case of _DEBUG you want to check whether it is defined at all, not compare it (a possibly missing definition) to zero. You want:
#if defined(_DEBUG)
or
#ifdef _DEBUG
By default, a Visual Studio C++ project will have the macro _DEBUG defined for a Debug project configuration. It will have the value 1, so you can test for it using #if or #ifdef.
But note that the macro starts with an underscore - if you want to use the name DEBUG (maybe you have existing code that uses that name), you'll need to add it to the project properties yourself (C/C++ | Preprocessor | Preprocessor definitions). Or you can put the following in a header that's included in every translation unit (maybe stdafx.h):
#if _DEBUG
#undef DEBUG
#define DEBUG 1
#endif
Every project has two builds: Debug and Release. The Debug build has _DEBUG defined, as if you had written:
#define _DEBUG
This lets the code be generated differently. Authors of code (functions, classes, etc.) may add additional diagnostics to aid in debugging. The Debug build is for debugging only, and you don't give this build (i.e. the EXE generated by the Debug build) to customers.
The other build, where the _DEBUG symbol is not defined, is the Release build. A Release build produces optimized code at the source level, the compiler-setting level, and the linker level. Most diagnostics, asserts, and debugging aids are disabled, so as to produce an optimized executable.
Whoever wrote the above code had the same thing in mind: flush the file only when a debug build is running. You can comment out the #if and #endif and let the fflush line compile, or you can use the Release build. It all depends on you.

Where can I learn about #ifdef?

I see this used often to make modules compatible with GHC and Hugs, but google is not helping me learn more about it.
What can I put inside the conditional? Can I make parts of a module conditional on what version of 'base' is in use?
EDIT 3/2017: This is a great resource: https://guide.aelve.com/haskell/cpp-vww0qd72
The GHC documentation has a section on the C pre-processor that documents some of the predefined pre-processor macros.
The Cabal documentation has a section relating to conditional compilation that gives an example relating to base. If you are writing a portable package, you should be using Cabal, anyway.
In addition to the very useful macros defined by GHC (OS, architecture, etc.), other flags and macros are defined when using cabal.
Check Package Versions
Here's a use from crypto-api that checks the version of the tagged package being used:
#if MIN_VERSION_tagged(0,2,0)
import Data.Proxy
#endif
Custom CPP Defines Based on Cabal Flags
You can define CPP symbols dependent on cabal flags. Here's an (unnecessarily complex) example from pureMD5 (from the .cabal file):
if arch(i386) || arch(x86_64)
cpp-options: -DFastWordExtract
Inside the .hs module you can then use #ifdef, for example:
#ifdef FastWordExtract
getNthWord n b = inlinePerformIO (unsafeUseAsCString b (flip peekElemOff n . castPtr))
#else
... other code ...
#endif
For more information you can see the Cabal users guide. This page has the "conditional compilation" information you're probably looking for.
#ifdef and friends are used by the C preprocessor (CPP). They provide a way to compile code conditionally. You can enable the use of the CPP by adding the pragma {-# LANGUAGE CPP #-} on top of a file.
Many programs that deal with Haskell code set macros for the preprocessor (e.g. GHC sets __GLASGOW_HASKELL__ to the version of GHC), so one can conditionally compile code, for instance to use different proprietary libraries for Hugs and GHC.
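As a small illustration of such a macro, the following picks a branch based on the compiler version. The format of __GLASGOW_HASKELL__ (e.g. 908 for GHC 9.8) is documented in the GHC manual:

```haskell
{-# LANGUAGE CPP #-}
-- __GLASGOW_HASKELL__ is defined by GHC itself, so no cabal setup is needed.
main :: IO ()
#if __GLASGOW_HASKELL__ >= 800
main = putStrLn "compiled with GHC 8.0 or newer"
#else
main = putStrLn "compiled with an older GHC"
#endif
```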
If you run your Haskell compiler with the -cpp option, it will first preprocess the source files with the CPP (C Pre Processor).
Take a look at section 4.11.3, "Options affecting the C pre-processor", here.

Building Visual C++ app that doesn't use CRT functions still references some

This is part of a series of at least two closely related, but distinct questions. I hope I'm doing the right thing by asking them separately.
I'm trying to get my Visual C++ 2008 app to work without the C Runtime Library. It's a Win32 GUI app without MFC or other fancy stuff, just plain Windows API.
So I set Project Properties -> Configuration -> C/C++ -> Advanced -> Omit Default Library Names to Yes (compiler flag /Zl) and rebuilt. Let's pretend I have written a suitable entry point function, which is the subject of my other question.
I get two linker errors; they are probably related. The linker complains about unresolved external symbols __fltused and _memcpy in foobar.obj. Needless to say, I use neither explicitly in my program, but I do use memcpy somewhere in foobar.cpp. (I would have used CopyMemory but that turns out to be #defined to be identical to memcpy...)
(I thought I could get rid of the memcpy problem by using a compiler intrinsic, like #pragma intrinsic(memcpy), but this makes no difference.)
If I look at the preprocessor output (adding /P to the compiler command line), I see no references to either __fltused or _memcpy in foobar.i.
So, my question is: Where do these linker errors come from, and how do I resolve them?
__fltused implies you are using, or have at least declared, some floats or doubles. The compiler injects this 'useless' symbol to cause a floating-point support .obj to get loaded from the CRT. You can get around this by simply declaring a symbol with that name:
#ifdef __cplusplus
extern "C" {
#endif
int _fltused = 0; // note the single underscore; the double-underscore __fltused is the decorated name
#ifdef __cplusplus
}
#endif
WRT _memcpy: memcpy is a __cdecl function, and all cdecl functions get an automatic leading underscore as part of their decoration. So when you write __cdecl memcpy, the compiler and linker go looking for a symbol called _memcpy. Intrinsic functions, even when explicitly requested, can still be emitted as real calls if the build has debug settings that contra-indicate intrinsics. So you are going to need to implement your own memcpy and related functions at some point anyway.
I recommend setting the "generate assembly listing" (or some such) compiler option for foobar.cpp once, and then inspecting the assembler code. This should really tell you where these symbols are used.
