Equivalent of $(@D) in NMake (batch) inference rules

I have a bunch of inference rules that I could simplify a lot if I had a way to express the directory of the current target. Normally one would use $(@D) for this purpose, but it doesn't seem to work for (batch) inference rules, i.e. the ones whose definition ends in a double colon (::):
fatal error U1100: macro '$(@D)' is illegal in the context of batch rule [...]
Is there an alternative method that is usable in inference rules and would allow me to keep the overall number of rules down and, in particular, avoid repeating myself with something that in another context would become an argument to a function call?
My current workaround is simply to use non-batch inference rules. So instead of the batch form:
{foo}.c{bar}.obj::
    $(CC) ...
I am using the non-batch form, which means it invokes the compiler once per file instead of once per "bunch of files matching the inference rule":
{foo}.c{bar}.obj:
    $(CC) ...
It also makes the log files quite a lot longer and quite a lot less readable, but the main cost is that it slows down builds.
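Spelled out with the target-directory macro, the non-batch workaround looks something like this (a sketch; the directory names and compiler flags are illustrative, not from my real tree):

# non-batch rule: $(@D) is accepted here and expands per target
{src}.c{obj}.obj:
    $(CC) $(CFLAGS) /c /Fo$(@D)\ $<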

Related

Fortran internal write suddenly an error according to Intel compiler

A Fortran code I am working on has several lines similar to
WRITE(filename, '(A16,"_",I4,".dat")') filename, indx
This code has been successfully compiled and run literally hundreds of times, on many different platforms and with pretty much all the major compilers. But suddenly the newest (or new, anyway) Intel compiler doesn't like it. It gives a warning message "forrtl: .... Internal file write-to-self; undefined results". After this line executes, filename, which held a reasonable character string, becomes blank.
I suppose the problem is that filename is both an input to the write and the destination of the internal write. The fix is easy enough: it works to replace filename as the destination with something like filename_tmp. But as I have said, this has never been necessary until now.
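Spelled out, that workaround looks something like this (a minimal, self-contained sketch; the declarations and values are illustrative):

program demo
  implicit none
  character(len=25) :: filename, filename_tmp
  integer :: indx
  filename = 'baseline_run'
  indx = 42
  ! filename is now only an output list item, not also the internal file
  write(filename_tmp, '(A16,"_",I4,".dat")') filename, indx
  filename = filename_tmp
end program demo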
So I am wondering: does using filename as both an input and the destination violate the Fortran standard, with all these compilers having turned a blind eye to it for all these years and Intel now getting strict? Or is Intel being "snobbish"? Or outright buggy?
Execution[1] of the write statement in the question has always been explicitly prohibited.
We currently see (F2018 12.6.4.5.1 p7):
During the execution of an output statement that specifies an internal file, no part of that internal file shall be referenced, defined, or become undefined as the result of evaluating any output list item.
filename is an internal file, and the evaluation of the output list item filename is a reference to that internal file.
This is not a programming violation that the compiler is required to detect, so you can view this as a case of improved diagnostic capability/pickiness of the compiler as you desire. No Fortran program is harmed by the change in this behaviour of the compiler.
Fortran 66 didn't have internal files (or character types), of course, and Fortrans 77, 90 and 95 used different words for the same effect (see, for example, F90 9.4.4):
If an internal file has been specified, an input/output list item must not be in the file or associated with the file.
In case it looks like this is more restrictive: from Fortran 2003 onwards the restrictions for input and output statements are stated separately (only output was quoted above; p8 covers input).
[1] Note the use of "execution": there's nothing wrong with the statement itself as a statement. It is allowed to exist in source code that isn't reached. Checking this statement when compiling is not a simple matter.

How to implement source map in a compiler?

I'm implementing a compiler compiling a source language to a target language (assembly like) in Haskell.
For debugging purposes, a source map is needed to map each target-language assembly instruction to its corresponding source position (line and column).
I've searched compiler-implementation material extensively, but none of it covers source maps.
Can anyone please point me in the right direction on how to generate a source map?
Code samples, books, etc. are all welcome; Haskell is preferred, but other languages are fine too.
Details depend on the compilation technique you're applying.
If you're doing it via a sequence of transforms over intermediate languages, as most sane compilers do these days, your options are the following:
Annotate all intermediate representation (IR) nodes with source location information. Introduce special nodes for preserving variable names (the original names will all be gone after you do, say, an SSA transform, so you need to track their origins separately)
Inject tons of intrinsic function calls (see how it's done in LLVM IR) instead of annotating each node
Do a mixture of the above
The first option can even be done nearly automatically: if each transform preserves the source location of an original node in all the nodes it creates from it, you only have to handle some non-trivial annotations by hand.
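In Haskell, the first option boils down to threading an annotation through your IR types, along these lines (a sketch with illustrative types, not taken from any particular compiler):

-- Every node carries the source span it originated from; each
-- transform copies the annotation onto the nodes it creates.
data SrcSpan = SrcSpan { srcLine :: Int, srcCol :: Int }
  deriving (Show)

data Expr
  = Lit Int
  | Var String
  | Add AnnExpr AnnExpr
  deriving (Show)

-- Factoring the annotation out keeps pattern matches on Expr clean
-- and lets a generic traversal copy locations mechanically.
data AnnExpr = AnnExpr { ann :: SrcSpan, expr :: Expr }
  deriving (Show)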
Also keep in mind that some optimisations may render your source location information absolutely meaningless. E.g., value numbering will collapse a number of similar expressions into one, probably preserving the source location of just one arbitrary origin. The same goes for rematerialisation.
With Haskell, the first approach will result in a lot of boilerplate in your ADT definitions and pattern matching, even if you sugar-coat it with something like Scrap Your Boilerplate (SYB), so I'd recommend the second approach, which is extensively documented and nicely demonstrated by LLVM IR.
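For a feel of what the second option looks like on the LLVM side, here is a hand-written fragment (not real compiler output; the metadata node numbers are arbitrary):

; an intrinsic call tracks the variable, !dbg attaches the location
call void @llvm.dbg.value(metadata i32 %x, metadata !9, metadata !DIExpression()), !dbg !7
%sum = add i32 %x, 1, !dbg !7
!7 = !DILocation(line: 3, column: 10, scope: !4)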

Allowing re-declaration of certain parameters inside a package for simulation

I have a system with some timeouts that are on the order of seconds; for the purposes of simulation I want to reduce these to micro- or milliseconds.
I have these timeouts defined in terms of the number of clock cycles of my FPGA's clock. So, as an example:
package time_pkg;
  parameter EXT_EN_SIG_TIMEOUT = 32'h12345678;
  ...
endpackage
I compare a counter against the constant global parameter EXT_EN_SIG_TIMEOUT to determine if it is the right time to assert an enable signal.
I want to have this parameter (as well as a bunch of others) defined in a package called time_pkg, in a file called time_pkg.v, and I want to use this package for synthesis.
But when I simulate my design in Riviera Pro (or ModelSim) I'd like to have second definitions of these parameters, in a file called time_pkg_sim.v, imported after time_pkg.v, that overwrite the parameters with the same names already defined in time_pkg.
If I simply make a time_pkg_sim.v containing a package with the same name (time_pkg), then Riviera complains, since I'm trying to re-declare a package that has already been declared.
I don't particularly want to litter my HDL with statements that check whether a simulation flag is set in order to decide whether to compare the counter against EXT_EN_SIG_TIMEOUT or EXT_EN_SIG_TIMEOUT_SIM.
Is there a standard way to allow re-definition of parameters inside packages when using a simulation tool?
No, you can't override parameters in packages. What you can do is have two different files that declare the same package with different parameter values, and then choose which file to compile for simulation and which for synthesis.
It may be a better idea to have one big `ifdef on the simulation flag inside the package. That way your code would not be littered with `ifdefs everywhere; they are concentrated in one place. Moreover, the code inside the modules themselves would not need to change.
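A sketch of that layout, assuming a SIMULATION macro passed on the tool command line (e.g. +define+SIMULATION; the shortened value is illustrative):

package time_pkg;
`ifdef SIMULATION
  parameter EXT_EN_SIG_TIMEOUT = 32'h00000100; // shortened for simulation
`else
  parameter EXT_EN_SIG_TIMEOUT = 32'h12345678; // real hardware value
`endif
endpackage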

How do I compile Haskell programs using Shake

I have a Haskell program that I want to compile with GHC, orchestrated by the Shake build system. Which commands should I execute, and under what circumstances should they be rerun?
There are two approaches to doing the compilation, and two approaches to getting the dependencies. You need to pick one from each set (all 4 combinations make sense), to come up with a combined approach.
Compilation
You can either:
Call ghc -c on each file in turn, depending on the .hs file and any .hi files it transitively imports, generating both a .hi and a .o file. At the end, call ghc -o, depending on all the .o files. For actual code see this example.
OR Call ghc --make once, depending on all .hs files. For actual code see this example.
The advantage of ghc --make is that it is faster than multiple calls to ghc -c, since GHC can load each .hi file only once instead of once per command. Typically the speedup is around 3x. The disadvantages are that parallelism is harder (you can pass -j to ghc --make, but Shake still assumes each action consumes one CPU) and that two ghc --make compilations can't both run at the same time if they overlap on any dependencies.
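As a minimal sketch of the ghc --make variant (the file names are assumed; the linked example is more complete):

import Development.Shake
import Development.Shake.FilePath

main :: IO ()
main = shakeArgs shakeOptions $ do
    want ["MyProg" <.> exe]
    "MyProg" <.> exe %> \out -> do
        -- depend on every .hs file; ghc --make then works out
        -- internally which modules actually need recompiling
        srcs <- getDirectoryFiles "" ["//*.hs"]
        need srcs
        cmd_ "ghc --make MyProg.hs -o" [out]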
Dependencies
You can either:
Parse the Haskell files to find dependencies recursively. To parse a file you can either look for import statements (and perhaps #include statements) following a coding convention, or use a library such as haskell-src-exts. For actual code with a very approximate import parser see this example.
OR Use the output of ghc -M to detect the dependencies, which can be parsed using the Shake helper function parseMakefile. For actual code see this example.
The advantage of parsing the Haskell files yourself is that it is possible to support generated Haskell files, and it can be much quicker. The advantage of using ghc -M is that it is easier to support all GHC features.
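For the ghc -M route, the generated file can be read back with that helper, roughly like this (a sketch; the .depend file name is an assumption):

import Development.Shake
import Development.Shake.Util (parseMakefile)

-- run ghc -M once, then turn its makefile-format output into
-- (target, dependencies) pairs that rules can 'need'
haskellDeps :: Action [(FilePath, [FilePath])]
haskellDeps = do
    unit $ cmd "ghc -M -dep-makefile .depend MyProg.hs"
    contents <- liftIO $ readFile ".depend"  -- untracked read; fine for a sketch
    return $ parseMakefile contents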

How to use 'make' with GHC Dependency Generation

I've got a couple of (independent) files that take quite a while to compile, so I thought I would try out parallel compilation, per Don Stewart's answer here.
I followed the directions here, so my makefile looks something like:
quickbuild:
    ghc --make MyProg.hs -o MyProg

depend:
    ghc -M -dep-makefile makefile MyProg

# DO NOT DELETE: Beginning of Haskell dependencies
...
MyProg.o : MyProg.hs
MyProg.o : B.hi
MyProg.o : C.hi
...
# DO NOT DELETE: End of Haskell dependencies
(Note: contrary to the docs, GHC seems to default to "Makefile" rather than "makefile", even when "makefile" exists.)
My question is: how do I make quickbuild depend on the auto-generated dependencies (so that make will actually run in parallel)? I tried adding 'MyProg.o' to the dependency list of 'quickbuild', but 'make' (rightly) complained that there was no rule to build 'B.hi'.
I suggest not using make for this kind of purpose.
Look at ghc-parmake and its issues, especially this one - GHC has a very sophisticated recompilation checker that you cannot replicate with Makefiles (it can detect e.g. if a package file outside of your own project changes).
You will also not get a large speedup (in practice no more than about 2x) from a parallel make -j running multiple GHCs at once, since firing up multiple GHCs has a high startup overhead that ghc --make avoids: each new GHC invocation has to parse and typecheck all the .hi interface files involved in the dependencies of the module it is compiling, while ghc --make caches them.
Instead, use the new ghc --make -j of GHC 7.8 - it is truly parallel.
It will be more reliable and less effort than your hand-written Makefile, and it does recompilation avoidance better than make can with its file time stamps.
At first view this sounds like a drawback of Haskell, but in fact it is not. In other languages that like to use make for building, say C++, it is impossible to notice when files outside of your project change; having a build system in the compiler itself, like ghc --make, makes it possible to notice this.
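Concretely, the makefile then shrinks to something like this (a sketch; -j is GHC's own parallelism flag from 7.8 onwards):

# let GHC parallelise internally instead of using make -j
# (the recipe line must be indented with a real tab)
quickbuild:
    ghc --make -j MyProg.hs -o MyProg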

Resources