I am new to the concept of lexing and am trying to write a lexer in OCaml to read the following example input:
(blue, 4, dog, 15)
Basically, the input is a list of arbitrary strings and integers. I have found many examples for integer-based input, as most of them model a calculator, but I have not found any guidance, through examples or the documentation, for lexing strings. Here is what I have so far as my lexer:
(* File lexer.mll *)
{
open Parser
}
rule lexer_main = parse
    [' ' '\r' '\t'] { lexer_main lexbuf } (* skip blanks *)
  | ['0'-'9']+ as lxm { INT(int_of_string lxm) }
  | '(' { LPAREN }
  | ')' { RPAREN }
  | ',' { COMMA }
  | eof { EOF }
  | _ { syntax_error "couldn't identify the token" }
As you can see, I am missing the ability to parse strings. I am aware that a string can be represented in the form ['a'-'z'], so would it be as simple as ['a'-'z'] { STRING }?
Thanks for your help.
The notation ['a'-'z'] represents a single character, not a string. So a string is more or less a sequence of one or more of those. I have a fear that this is an assignment, so I'll just say that you can extend a pattern for a single character into a pattern for a sequence of the same kind of character using the same technique you're using for INT.
However, I wonder whether you really want your strings to be so restrictive. Are they really required to consist of alphabetic characters only?
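For example, a sketch of such a rule, assuming the parser declares a token that carries a string (e.g. %token <string> STRING):

  | ['a'-'z']+ as lxm { STRING(lxm) }

The character set can be widened, say to ['a'-'z' 'A'-'Z']+, if the strings are allowed to be less restrictive.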
I am fairly new to PowerShell programming, so I need help with a set of strings I have, as described below:
"14-2-1-1"
"14-2-1-1-1"
"14-2-1-1-10"
I want to zero-pad each number between the - separators if that number is between 1 and 9. So the result should look like:
"14-02-01-01"
"14-02-01-01-01"
"14-02-01-01-10"
I came up with the following code but was wondering if there is a better/faster solution.
$Filenum = "14-2-1-1"
$hicount = ($Filenum.ToCharArray() | Where-Object{$_ -eq '-'} | Measure-Object).Count
$FileNPad = ''
For ($i=0; $i -le $hicount; $i++) {
$Filesec= "{$i}" -f $Filenum.split('-')
If ([int]$Filesec -le 9)
{
$FileNPad = "$FileNPad-"+"0"+"$Filesec"
}
Else
{
$FileNPad="$FileNPad-$Filesec"
}
}
$FileNPad = $FileNPad.Trim("-"," ")
Instead of manually keeping track of how many elements there are and inspecting each value, you can simply split on -, call PadLeft on each element (via member-access enumeration), then join back together with -:
"14-2-1-1","14-2-1-1-1","14-2-1-1-10" | ForEach-Object {
    ($_ -split '-').PadLeft(2,'0') -join '-'
}
Which outputs
14-02-01-01
14-02-01-01-01
14-02-01-01-10
I'd be inclined to go with something like Doug Maurer's answer due to its clarity, but here's another way to look at this. The last section below shows a solution that might be just as clear with some advantages of its own.
Pattern of digit groups in the input
Your input strings are composed of one or more groups, where each group...
...contains one or more digits, and...
...is preceded by a - or the beginning of the string, and...
...is followed by a - or the end of the string.
Groups that require a leading "0" to be inserted contain exactly one digit; that is, they consist of...
...a - or the beginning of the string, followed by...
...a single digit, followed by...
...a - or the end of the string.
Replacing single-digit digit groups using the -replace operator
We can use regular expressions with the -replace operator to locate that pattern and replace the single digit with a "0" followed by that same digit...
'0-0-0-0', '00-00-00-00', '1-2-3-4', '01-02-03-04', '10-20-30-40', '11-22-33-44' |
ForEach-Object -Process { $_ -replace '(?<=-|^)(\d)(?=-|$)', '0$1' }
...which outputs...
00-00-00-00
00-00-00-00
01-02-03-04
01-02-03-04
10-20-30-40
11-22-33-44
As the documentation describes, the -replace operator is used like this...
<input> -replace <regular-expression>, <substitute>
The match pattern
The match pattern '(?<=-|^)(\d)(?=-|$)' means...
(?<=-|^): A zero-width positive lookbehind assertion for either - or the beginning of the string
In other words, match but don't capture either - or the beginning of the string
(\d): A single digit, captured and made available with the replacement substitution $1
(?=-|$): A zero-width positive lookahead assertion for either - or the end of the string
In other words, match but don't capture either - or the end of the string
The replacement pattern
The replacement pattern '0$1' means...
The literal text '0', followed by...
The value of the first capture ((\d))
Replacing single-digit digit groups using [Regex]::Replace() and a replacement [String]
Instead of the -replace operator you can also call the static Replace() method of the [Regex] class...
'0-0-0-0', '00-00-00-00', '1-2-3-4', '01-02-03-04', '10-20-30-40', '11-22-33-44' |
ForEach-Object -Process { [Regex]::Replace($_, '(?<=-|^)(\d)(?=-|$)', '0$1') }
...and the result is the same.
Replacing digit groups using [Regex]::Replace() and a [MatchEvaluator]
A hybrid of the regular expression and imperative solutions is to call an overload of the Replace() method that takes a [MatchEvaluator] instead of a replacement [String]...
# This [ScriptBlock] will be passed to a [System.Text.RegularExpressions.MatchEvaluator] parameter
$matchEvaluator = {
    # The [System.Text.RegularExpressions.Match] parameter
    param($match)

    # The replacement [String]
    return $match.Value.PadLeft(2, '0')
}
'0-0-0-0', '00-00-00-00', '1-2-3-4', '01-02-03-04', '10-20-30-40', '11-22-33-44' |
ForEach-Object -Process { [Regex]::Replace($_, '(\d+)', $matchEvaluator) }
This produces the same result as above.
A [MatchEvaluator] is a delegate that takes a Match to be replaced ($match) and returns the [String] with which to replace it (the matched text left-padded to two digits). Also note that, whereas above we are capturing only standalone digits (\d), here we capture all groups of one or more digits (\d+) and leave it to PadLeft() to figure out if a leading "0" is needed.
I think this is a much more compelling solution than regular expressions alone because it combines the best of the regular-expression and imperative worlds:
It uses a simple regular expression pattern to locate digit groups in the input string
It uses a simple [ScriptBlock] to transform digit groups in the input string
By not splitting the input string apart it does not create as much intermediate string and array garbage
Whether this potential performance improvement is overshadowed by using regular expressions at all, I can't say without benchmarking (a rough sketch of such a benchmark follows)
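If you do want to measure, here is a rough sketch using Measure-Command (the iteration count and inputs are arbitrary assumptions for illustration):

$inputs = '0-0-0-0', '1-2-3-4', '14-2-1-1-10'
Measure-Command {
    foreach ($n in 1..10000) {
        foreach ($s in $inputs) {
            $null = $s -replace '(?<=-|^)(\d)(?=-|$)', '0$1'
        }
    }
} | Select-Object TotalMilliseconds

Repeat with the other variants (split/PadLeft/join, the [MatchEvaluator] overload) to compare.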
I'm currently writing an interpreter for a language I have designed.
The lexer/parser (GLR) is written in Flex/Bison and the main interpreter in D, and everything is working flawlessly so far.
The thing is, I also want to add string interpolation, that is, identify string literals that contain a specific pattern (e.g. "[some expression]") and convert the included expression. I think this should be done at the parser level, from within the corresponding grammar action.
My idea is to convert/treat the interpolated string as what it would look like with simple concatenation (which already works).
E.g.
print "this is the [result]. yay!"
to
print "this is the " + result + ". yay!"
However, I'm a bit confused as to how I could do that in Bison: basically, how do I tell it to re-parse a specific string (while constructing the main AST)?
Any ideas?
You could reparse the string, if you really wanted to, by generating a reentrant parser. You would probably want a reentrant scanner as well, although I suppose you could kludge something together with a default scanner, using flex's buffer stack. Indeed, it's worth learning how to build reentrant parsers and scanners on the general principle of avoiding unnecessary globals, whether or not you need them for this particular purpose.
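For reference, the relevant declarations look roughly like this (a sketch; the exact options vary with the flex/bison versions in use):

/* flex: generate a reentrant scanner whose yylval hooks into bison */
%option reentrant bison-bridge

/* bison: generate a pure (reentrant) parser */
%define api.pure full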
But you don't really need to reparse anything; you can do the entire parse in one pass. You just need enough smarts in your scanner so that it knows about nested interpolations.
The basic idea is to let the scanner split the string literal with interpolations into a series of tokens, which can easily be assembled into an appropriate AST by the parser. Since the scanner may return more than one token out of a single string literal, we'll need to introduce a start condition to keep track of whether the scan is currently inside a string literal or not. And since interpolations can, presumably, be nested we'll use flex's optional start condition stack, enabled with %option stack, to keep track of the nested contexts.
So here's a rough sketch.
As mentioned, the scanner has extra start conditions: SC_PROGRAM, the default, which is in effect while the scanner is scanning regular program text, and SC_STRING, in effect while the scanner is scanning a string. SC_PROGRAM is only needed because flex does not provide an official interface to check whether the start condition stack is empty; aside from nesting, it is identical to the INITIAL top-level start condition. The start condition stack is used to keep track of interpolation markers ([ and ] in this example), and it is needed because an interpolated expression might use brackets (as array subscripts, for example) or might even include a nested interpolated string. Since SC_PROGRAM is, with one exception, identical to INITIAL, we'll make it an inclusive start condition.
%option stack
%s SC_PROGRAM
%x SC_STRING
%%
Since we're using a separate start condition to analyse string literals, we can also normalise escape sequences as we parse. Not all applications will want to do this, but it's pretty common. But since that's not really the point of this answer, I've left out most of the details. More interesting is the way that embedded interpolation expressions are handled, particularly deeply nested ones.
The end result will be to turn string literals into a series of tokens, possibly representing a nested structure. In order to avoid actually parsing in the scanner, we don't make any attempt to create AST nodes or otherwise rewrite the string literal; instead, we just pass the quote characters themselves through to the parser, delimiting the sequence of string literal pieces:
["] { yy_push_state(SC_STRING); return '"'; }
<SC_STRING>["] { yy_pop_state(); return '"'; }
A very similar set of rules is used for interpolation markers:
<*>"[" { yy_push_state(SC_PROGRAM); return '['; }
<INITIAL>"]" { return ']'; }
<*>"]" { yy_pop_state(); return ']'; }
The second rule above avoids popping the start condition stack if it is empty (as it will be in the INITIAL state). It's not necessary to issue an error message in the scanner; we can just pass the unmatched close bracket through to the parser, which will then do whatever error recovery seems necessary.
To finish off the SC_STRING state, we need to return tokens for pieces of the string, possibly including escape sequences:
<SC_STRING>{
  [^[\\"]+    { yylval.str = strdup(yytext); return T_STRING; }
  \\n         { yylval.chr = '\n'; return T_CHAR; }
  \\t         { yylval.chr = '\t'; return T_CHAR; }
  /* ... Etc. */
  \\x[[:xdigit:]]{2}  { yylval.chr = strtoul(yytext + 2, NULL, 16);
                        return T_CHAR; }
  \\.         { yylval.chr = yytext[1]; return T_CHAR; }
}
Returning escaped characters like that to the parser is probably not the best strategy; normally I would use an internal scanner buffer to accumulate the entire string. But it was simple for illustrative purposes. (Some error handling is omitted here; there are various corner cases, including newline handling and the annoying case where the last character in the program is a backslash inside an unterminated string literal.)
In the parser, we just need to insert a concatenation node for interpolated strings. The only complication is that we don't want to insert such a node for the common case of a string literal without any interpolations, so we use two syntax productions, one for a string with exactly one contained piece, and one for a string with two or more pieces:
string : '"' piece '"' { $$ = $2; }
| '"' piece piece_list '"' { $$ = make_concat_node(
prepend_to_list($2, $3));
}
piece : T_STRING { $$ = make_literal_node($1); }
| '[' expr ']' { $$ = $2; }
piece_list
: piece { $$ = new_list($1); }
| piece_list piece { $$ = append_to_list($1, $2); }
So I'm writing simple parsers for some programming languages in SWI-Prolog using Definite Clause Grammars. The goal is to return true if the input string or file is valid for the language in question, or false if the input string or file is not valid.
In almost all of the languages there is an "identifier" predicate. In most of the languages the identifier is defined as one of the following in EBNF: letter { letter | digit } or ( letter | digit ) { letter | digit }, that is to say, in the first case a letter followed by zero or more alphanumeric characters, or in the second case an alphanumeric character followed by zero or more alphanumeric characters.
My input file is split into a list of word strings (i.e. someIdentifier1 = 3 becomes the list [someIdentifier1,=,3]). The reason for the string to be split into lists of words rather than lists of letters is for recognizing keywords defined as terminals.
How do I implement "identifier" so that it recognizes any alphanumeric string, or a string consisting of a letter followed by alphanumeric characters?
Is it possible or necessary to further split the word into letters for this particular predicate only, and if so how would I go about doing this? Or is there another solution, perhaps using SWI-Prolog libraries' built-in predicates?
I apologize for the poorly worded title of this question; however, I am unable to clarify it any further.
First, when you need to reason about individual letters, it is typically most convenient to reason about lists of characters.
In Prolog, you can easily convert atoms to characters with atom_chars/2.
For example:
?- atom_chars(identifier10, Cs).
Cs = [i, d, e, n, t, i, f, i, e, r, '1', '0'].
Once you have such characters, you can use predicates like char_type/2 to reason about properties of each character.
For example:
?- char_type(i, T).
T = alnum ;
T = alpha ;
T = csym ;
etc.
The general pattern to express identifiers such as yours with DCGs can look as follows:
identifier -->
    [L],
    { letter(L) },
    identifier_rest.

identifier_rest --> [].
identifier_rest -->
    [I],
    { letter_or_digit(I) },
    identifier_rest.
You can use this as a building block, and only need to define letter/1 and letter_or_digit/1. This is very easy with char_type/2.
Further, you can of course introduce an argument to relate such lists to atoms.
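For example, a minimal sketch of those auxiliary predicates, together with a query that ties everything together:

letter(C)          :- char_type(C, alpha).
letter_or_digit(C) :- char_type(C, alnum).

?- atom_chars(someIdentifier1, Cs), phrase(identifier, Cs).
Cs = [s, o, m, e, 'I', d, e, n, t, i, f, i, e, r, '1'].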
I am trying to write a grammar that will match the finite closure pattern for regular expressions (i.e. foo{1,3} matches 1 to 3 'o' appearances after the 'fo' prefix).
To identify the string {x,y} as a finite closure, it must not include spaces; for example, { 1, 3} is recognized as a sequence of seven characters.
I have written the following lexer and parser files, but I am not sure if this is the best solution. I am using a lexical mode for the closure pattern, which is activated when a regular expression matches a valid closure expression.
lexer grammar closure_lexer;
@header { using System;
using System.IO; }
@lexer::members{
public static bool guard = true;
public static int LBindex = 0;
}
OTHER : .;
NL : '\r'? '\n' ;
CLOSURE_FLAG : {guard}? {LBindex =InputStream.Index; }
'{' INTEGER ( ',' INTEGER? )? '}'
{ closure_lexer.guard = false;
// Go back to the opening brace
InputStream.Seek(LBindex);
Console.WriteLine("Enter Closure Mode");
Mode(CLOSURE);
} -> skip
;
mode CLOSURE;
LB : '{';
RB : '}' { closure_lexer.guard = true;
Mode(0); Console.WriteLine("Enter Default Mode"); };
COMMA : ',' ;
NUMBER : INTEGER ;
fragment INTEGER : [1-9][0-9]*;
and the parser grammar
parser grammar closure_parser;
@header { using System;
using System.IO; }
options { tokenVocab = closure_lexer; }
compileUnit
: ( other {Console.WriteLine("OTHER: {0}",$other.text);} |
closure {Console.WriteLine("CLOSURE: {0}",$closure.text);} )+
;
other : ( OTHER | NL )+;
closure : LB NUMBER (COMMA NUMBER?)? RB;
Is there a better way to handle this situation?
Thanks in advance
This looks quite complex for such a simple task. You can easily let your lexer match one construct (preferably the one without whitespace, if you usually skip whitespace) and let the parser match the other form. You don't even need lexer modes for that.
Define your closure rule:
CLOSURE
: OPEN_CURLY INTEGER (COMMA INTEGER?)? CLOSE_CURLY
;
This rule will not match any form that contains e.g. whitespaces. So, if your lexer does not match CLOSURE you will get all the individual tokens like the curly braces and integers ending up in your parser for matching (where you then can treat them as something different).
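For completeness, the supporting tokens referenced by CLOSURE might be defined like this (a sketch; INTEGER follows the question's pattern):

OPEN_CURLY  : '{';
CLOSE_CURLY : '}';
COMMA       : ',';
INTEGER     : [1-9][0-9]*;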
NB: doesn't the closure definition also allow {,n} (same as {n})? That requires an additional alt in the CLOSURE rule.
And finally a hint: your OTHER rule will probably give you trouble, as it matches any character and is even located before other rules. If you have a wildcard rule, it should be the last one in your grammar, matching everything not matched by any other rule.
I saw the operator r#"" in Rust but I can't find what it does. It came in handy for creating JSON:
let var1 = "test1";
let json = r#"{"type": "type1", "type2": var1}"#;
println!("{}", json) // => {"type2": "type1", "type2": var1}
What's the name of the operator r#""? How do I make var1 evaluate?
I can't find what it does
It has to do with string literals and raw strings. I think it is explained pretty well in this part of the documentation; in the code block posted there you can see what it does:
"foo"; r"foo"; // foo
"\"foo\""; r#""foo""#; // "foo"
"foo #\"# bar";
r##"foo #"# bar"##; // foo #"# bar
"\x52"; "R"; r"R"; // R
"\\x52"; r"\x52"; // \x52
It eliminates the need to escape special characters inside the string.
The r character at the start of a string literal denotes a raw string literal. It's not an operator, but rather a prefix.
In a normal string literal, there are some characters that you need to escape to make them part of the string, such as " and \. The " character needs to be escaped because it would otherwise terminate the string, and the \ needs to be escaped because it is the escape character.
In raw string literals, you can put an arbitrary number of # symbols between the r and the opening ". To close the raw string literal, you must have a closing ", followed by the same number of # characters as there are at the start. With zero or more # characters, you can put literal \ characters in the string (\ characters do not have any special meaning). With one or more # characters, you can put literal " characters in the string. If you need a " followed by a sequence of # characters in the string, just use the same number of # characters plus one to delimit the string. For example: r##"foo #"# bar"## represents the string foo #"# bar. The literal doesn't stop at the quote in the middle, because it's only followed by one #, whereas the literal was started with two #.
To answer the last part of your question, there's no way to have a string literal that evaluates variables in the current scope. Some languages, such as PHP, support that, but not Rust. You should consider using the format! macro instead. Note that for JSON, you'll still need to double the braces, even in a raw string literal, because the string is interpreted by the macro.
fn main() {
    let var1 = "test1";
    let json = format!(r#"{{"type": "type1", "type2": {}}}"#, var1);
    println!("{}", json); // => {"type": "type1", "type2": test1}
}
If you need to generate a lot of JSON, there are many crates that will make it easier for you. In particular, with serde_json, you can define regular Rust structs or enums and have them serialized automatically to JSON.
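For example, a sketch with serde_json's json! macro (this assumes serde_json has been added as a dependency):

use serde_json::json;

fn main() {
    let var1 = "test1";
    // json! builds a serde_json::Value; variables interpolate directly,
    // and no brace-doubling or manual quoting is needed
    let json = json!({ "type": "type1", "type2": var1 });
    println!("{}", json); // => {"type":"type1","type2":"test1"}
}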
The first time I saw this weird notation was in the glium tutorials (an old crate for graphics management), where it is used to "encapsulate" and pass GLSL code (the OpenGL Shading Language) to the GPU's shaders:
https://github.com/glium/glium/blob/master/book/tuto-02-triangle.md
As far as I understand, the content of r#"..."# is left untouched; it is not interpreted in any way. Hence "raw" string.