I am writing documentation for a library using haddock, and, for reasons that are not really necessary to explain, I need to include a little code block in my documentation that looks like:
z(<>)
Importantly there can be no space between z and (<>). It may be a bit esoteric but
z (<>)
would make my documentation incorrect, even if it is more stylistically correct.
Now I believe that hyperlinks to both z and (<>) would be helpful. Neither has a very informative name, so a link that helps people remember their definitions and purpose is nice.
So my code without the hyperlinks looks like:
@z(<>)@
And to add hyperlinks I just use single quotes:
@'z''(<>)'@
Except that doesn't work: haddock sees 'z'' and thinks that I mean to link z' (a function that does exist in my module), then just leaves the rest alone. The rendered output looks like
z'(<>)'
Now, as an experiment, I deleted the definition of z'; however, the only difference this makes is that the link to z' goes away. The raw text remains the same. The next thing I tried was ditching the @s altogether and writing
'z''(<>)'
This also created a hyperlink to z' and left the rest untouched: the same problem as earlier, except now nothing is in a code block.
How can I make a code block that links two functions without a space between?
You can separate the two functions into different code blocks. If there is no space between the code blocks, it will appear no different than a single code block. So
@'z'@@'(<>)'@
will render as desired.
You can also do it in one code block by moving the 's inside the parentheses so that they only surround <>.
@'z'('<>')@
This will render slightly differently, with the parentheses not being part of any hyperlink, but that may be what you want.
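For reference, here is a minimal sketch of how either form could sit in a real doc comment (the module name and the placeholder definition of z are mine, not from your library):

module Example (z) where

-- | Some function whose documentation needs to show @'z'@@'(<>)'@
-- (two adjacent code blocks, rendered with no gap) or, alternatively,
-- @'z'('<>')@ (a single block that links only the operator).
z :: (a -> a -> a) -> a -> a
z op x = x `op` x  -- placeholder body, just so the example compiles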
Here is an alternative solution to add to the answer you already provided:
You can mix and match ' and `. These two will also be rendered correctly by haddock:
-- | @`z`'(<>)'@
-- | @'z'`(<>)`@
At the same time, I tried your solution @'z'@@'(<>)'@ and for some reason it did not render properly for me, but with haddock you never know.
Here are all of the ones that I've tried:
-- * @'z'@@'(<>)'@
-- * @'z'('<>')@
-- * @'z'`(<>)`@
-- * @`z`'(<>)'@
With corresponding result:
I've got a file format that looks a little like this:
blockA {
    uniqueName42 -> uniqueName aWord1 anotherWord "Some text"
    anotherUniqueName -> uniqueName23 aWord2
    blockB {
        thing -> anotherThing
    }
}
Lots more blocks with arbitrary nesting levels.
The lines with the arrow in them define relationships between two things. Each relationship has some optional metadata (multi-word quoted or single word unquoted).
The challenge I'm having is that, because there can be an arbitrary number of metadata items in a relationship, my parser is treating anotherUniqueName as a metadata item from the first relationship rather than the start of the second relationship.
You can see this in the image below. The parser is only recognising one relationshipDeclaration when a second should start with StringLiteral: anotherUniqueName
The parser looks a bit like this:
block
: BLOCK LBRACE relationshipDeclaration* RBRACE
;
relationshipDeclaration
: StringLiteral? ARROW StringLiteral StringLiteral*
;
I'm hoping to avoid lexical modes because the fact that these relationships can appear almost anywhere in the file will leave me up to my eyes in NL+ :-(
Would appreciate any ideas on what options I have. Is there a way to look ahead, spot the '->', for example?
Thanks a million.
Your example certainly looks like the NL is what signals the end of a relationshipDeclaration.
If that's the case, then you'll need NLs to be tokens available to your parser rules so the parser can recognize the end.
As you've alluded to, you could potentially use -> to trigger a different Lexer Mode and generate different tokens for content between the -> and the NL and then use those tokens in your parse rule for relationshipDeclaration.
If it's as simple as your snippet indicates, then just capturing RD_StringLiteral tokens in that lexical mode would probably be easier to deal with than handling all the places you might need to allow for NL. This would be pretty simple as Lexer modes go.
(BTW you can use x+ to get the same effect as x x*)
relationshipDeclaration
: StringLiteral? ARROW RD_StringLiteral+
;
I don't think there's a third option for dealing with this.
Is it possible, when using IHaskell, to have all the output automatically processed by LaTeX, or understood as Markdown?
Perhaps this will involve (at least if I want it to work with data of type MyType) using import IHaskell.Display and instance IHaskellDisplay MyType where... but I don't know how to make this work!
Thanks!
edit: Someone asked for an example, so here is what I have in mind: every string of output (for every output is a string, ultimately...) is processed as LaTeX code (or Markdown). If a function returns, say, an integer, the result will barely change, but if a function returns the string $\mathbb{Z}$ then what appears on the screen is
$\mathbb{Z}$
[Alert! I thought we had LaTeX formulae on Stack Overflow, just like we do on MathOverflow, but if we don't, you need your imagination here!...]
Ultimately I imagine I would have a class Latexable a where showlatex :: a -> String and I would implement showlatex for some types.
Well, I'm happy with various partial solutions that allow me to have some formulae typeset directly in the notebook; it doesn't really matter whether all output is processed...
Here's a partial answer to my own question.
import IHaskell.Display (latex)
Then if you try, say
latex "$x+y$"
it works!
It remains to find a mechanism so that latex is automatically called in certain situations, so the question remains open. But in most cases, I'm good.
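One way to make the typesetting automatic for your own types is to give them an IHaskellDisplay instance that goes through latex. Below is an untested sketch; it assumes the usual IHaskell.Display API (display returning an IO Display built from DisplayData values), the Latexable class is the one imagined above, and the Residues type is just a made-up example:

import IHaskell.Display (IHaskellDisplay (..), Display (..), latex)

-- The class imagined in the question.
class Latexable a where
  showlatex :: a -> String

-- A made-up type with a LaTeX rendering.
newtype Residues = Residues Int

instance Latexable Residues where
  showlatex (Residues n) = "$\\mathbb{Z}/" ++ show n ++ "\\mathbb{Z}$"

-- When a cell's result is a Residues value, IHaskell calls 'display'
-- and typesets the LaTeX instead of printing a plain String.
instance IHaskellDisplay Residues where
  display x = return (Display [latex (showlatex x)])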
I would like to change the behavior of menhir's output in the following way:
I want it to look up all grammatical alternatives if it finds any, put them in a list, and give me back this ambiguous interpretation. It should not resolve the conflicts, just store them.
In the source code of menhir, it seems to me that I have to look at "Engine.ml". The resulting syntactically determined token arrives in the variant constructor "Accepted v", as one state of a checkpoint of the grammar automaton. That content is produced beforehand by a function "accept env prod", which is part of a bundle of recursive functions that change the states.
Do you have a tip on how I could change these functions to put all the possible results into the list and proceed as if nothing had happened? Or do you think that this won't work anyway?
Thanks.
What you are looking for is a GLR parser generator (the G is for "generalized"). Menhir is not such a tool, and I doubt you could modify it easily to do what you want.
However, there is another tool that does exactly what you want: dypgen.
Is there any way to have an Alex macro defined in one source file and used in other source files? In my case, I have definitions for $LowerCaseLetter and $UpperCaseLetter (these are all letters except e and O, since they have special roles in my code). How can I refer to these macros from other .x files?
Disproving that something exists is always harder than finding something that does exist, but I think the info below does show that Alex can only get macro definitions from the .x file it is reading (other than predefined stuff like $white), and not via includes from other files...
You can get the source code for Alex by doing the following:
> cabal unpack alex
> cd alex-3.1.3
In src/Main.hs, predefined macros are first set in variables called initSetEnv (charset macros $white, $printable, and "."), and initREEnv (regexp macros, there are none). This gets passed into runP, in src/ParseMonad.hs, which is used to hold the current parsing state, including all defined macros. The initial state is set using the values passed in, but macros can be added using a function called newSMac (or newRMac for regular expression macros).
Since this seems to be the only way that macros can be set, it is then only a matter of some grep bookkeeping to verify that the only way macros can be added is through an actual macro definition in the source .x file. Unsurprisingly, Alex recursively uses its own .x/.y files for .x source file parsing (src/parser.y, src/Scan.x). It is a couple of levels of indirection away, but you can verify that the only way newSMac can be called is through the src/Scan.x macro
@smac = \$ @id | \$ \{ @id \}
<0> @smac @ws? \= { smacdef }
Other than some obvious predefined stuff, I don't believe reuse in lexers is all that typical anyway, because at the token level things are usually pretty simple (often simple tokens like SPACE, WORD, NUMBER, and a few operators, symbols and parens are all that are needed). The complexity comes at the parsing stage, although for technical reasons parser includes aren't that common either (see scannerless parsing for a newer technology that does allow reuse through nesting, like JavaScript embedded in HTML... the tools for scannerless parsing are still pretty primitive, though).
I have a LaTeX document where I'd like the numbering of floats (tables and figures) to be in one numeric sequence from 1 to x rather than two sequences according to their type. I'm not using lists of figures or tables either and do not need to.
My documentclass is report and typically my floats have captions like this:
\caption{Breakdown of visualisations created.}
\label{tab:Visualisation_By_Types}
A quick way to do it is to put \addtocounter{table}{1} after each figure, and \addtocounter{figure}{1} after each table.
It's not pretty, and on a longer document you'd probably want to either include that in your style sheet or template, or go with cristobalito's solution of linking the counters.
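For instance (a sketch; the figure contents and the label are placeholders), after each figure you would add the counter bump by hand:

\begin{figure}[ht]
  \centering
  % ... figure contents ...
  \caption{Breakdown of visualisations created.}
  \label{fig:Visualisation_By_Types}
\end{figure}
\addtocounter{table}{1}% keep the table counter in step with the figures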
The differences between the figure and table environments are very minor -- little more than them using different counters, and being maintained in separate sequences.
That is, there's nothing stopping you putting your {tabular} environments in a {figure}, or your graphics in a {table}, which would mean that they'd end up in the same sequence. The problem with this case (as Joseph Wright notes) is that you'd have to adjust the \caption, so that doesn't work perfectly.
Try the following, in the preamble:
\makeatletter
\newcounter{unisequence}
\def\ucaption{%
  \ifx\@captype\@undefined
    \@latex@error{\noexpand\ucaption outside float}\@ehd
    \expandafter\@gobble
  \else
    \refstepcounter{unisequence}% <-- the only change from default \caption
    \expandafter\@firstofone
  \fi
  {\@dblarg{\@caption\@captype}}%
}
\def\thetable{\@arabic\c@unisequence}
\def\thefigure{\@arabic\c@unisequence}
\makeatother
Then use \ucaption in your tables and figures, instead of \caption (change the name ad lib). If you want to use this same sequence in other environments (say, listings?), then define \the<foo> the same way.
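For instance (placeholder content aside), a table and a figure would then share the single sequence:

\begin{table}[ht]
  \centering
  \begin{tabular}{ll}
    A & B \\
  \end{tabular}
  \ucaption{Breakdown of visualisations created.}
  \label{tab:Visualisation_By_Types}
\end{table}

\begin{figure}[ht]
  \centering
  % ... graphic ...
  \ucaption{A figure, numbered in the same sequence as the table above.}
  \label{fig:Some_Figure}
\end{figure}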
My earlier attempt at this is in fact completely broken, as the OP spotted: the getting-the-lof-wrong is, instead of being trivial and only fiddly to fix, absolutely fundamental (ho, hum).
(For the aficionados, it comes about because \advance commands are processed in TeX's gut, but the content of the .lof, .lot, and .aux files is fixed in TeX's mouth, at expansion time; thus what was written to the files was whatever random value \@tempcnta had at the point \caption was called, ignoring the \advance calculations, which were then dutifully written to the file, and then ignored. Doh: how long have I known this but never internalised it!?)
Dutiful retention of earlier attempt (on the grounds that it may be instructively wrong):
No problem: try putting the following in the preamble:
\makeatletter
\def\tableandfigurenum{\@tempcnta=0
  \advance\@tempcnta\c@figure
  \advance\@tempcnta\c@table
  \@arabic\@tempcnta}
\let\thetable\tableandfigurenum
\let\thefigure\tableandfigurenum
\makeatother
...and then use the {table} and {figure} environments as normal. The captions will have the correct 'Table/Figure' text, but they'll share a single numbering sequence.
Note that this example gets the numbers wrong in the listoffigures/listoftables, but (a) you say you don't care about that, (b) it's fixable, though probably mildly fiddly, and (c) life is hard!
I can't remember the syntax, but you're essentially looking for counters. Have a look here, under the custom floats section. Assign the counters for both tables and figures to the same thing and it should work.
I'd just use one type of float (let's say 'figure'), then use the caption package to remove the automatically added "Figure" text from the caption and deal with it by hand.
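A sketch of that approach, assuming the caption package's labelformat/labelsep options (the "Table"/"Figure" wording and the number then have to be written into the caption text by hand; the tabular content and number are placeholders):

% in the preamble:
\usepackage{caption}
\captionsetup[figure]{labelformat=empty, labelsep=none}% drop the automatic "Figure N:" label

% in the body: a table set as a figure, with its label written by hand
\begin{figure}[ht]
  \centering
  \begin{tabular}{ll}
    A & B \\
  \end{tabular}
  \caption{Table 3: Breakdown of visualisations created.}
\end{figure}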