How can I use arbitrary text as a function name in Rust?

Is there a way in Rust to use any text as a function name? Something like:
fn 'This is the name of the function' { ... }
I find it useful for test functions, and some other languages allow it.

There's no way. According to the official reference:
An identifier is any nonempty ASCII string of the following form:
Either
- The first character is a letter.
- The remaining characters are alphanumeric or _.
Or
- The first character is _.
- The identifier is more than one character (_ alone is not an identifier).
- The remaining characters are alphanumeric or _.
A raw identifier is like a normal identifier, but prefixed by r#. (Note that
the r# prefix is not included as part of the actual identifier.)
Unlike a normal identifier, a raw identifier may be any strict or reserved
keyword except the ones listed above for RAW_IDENTIFIER.
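Raw identifiers only let you reuse keywords as names; the text after r# still has to follow the identifier rules above, so it cannot contain spaces either. A small illustrative sketch:

// `match` is a keyword, so it needs the r# prefix to be used as a name.
fn r#match() {}

fn main() {
    r#match();
}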

You can't have spaces in function names (and this is true of most programming languages). Usual practice for function names in Rust is to replace spaces with underscores, so the following is allowed:
fn This_is_the_name_of_the_function() { ... }
although idiomatic Rust uses snake_case, i.e. a lower-case t (the compiler warns about non-snake-case function names).
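For instance, a minimal sketch of how such a descriptively named test usually looks (the assertion is just a placeholder):

#[cfg(test)]
mod tests {
    #[test]
    fn this_is_the_name_of_the_function() {
        // The snake_case test name is what shows up in `cargo test` output.
        assert_eq!(2 + 2, 4);
    }
}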


Can you set multiple (different) tags with the same value?

For some of my projects, I have had to use the viper package to use configuration.
The package requires you to add a mapstructure:"fieldname" tag so that your configuration object's fields are identified and set correctly, but I have also had to add other tags for other purposes, leading to something like the following:
type MyStruct struct {
MyField string `mapstructure:"myField" json:"myField" yaml:"myField"`
}
As you can see, it is quite redundant to write tag:"myField" for each of my tags, so I was wondering if there is any way to "bundle" them up and reduce the verbosity, with something like mapstructure,json,yaml:"myField".
Or is it simply not possible, and you must specify every tag separately?
Struct tags are arbitrary string literals. Data stored in struct tags may look like whatever you want it to be, but if you don't follow the conventions, you'll have to write your own parser / processing logic. If you follow the conventions, you may use StructTag.Get() and StructTag.Lookup() to easily get tag values.
The conventions do not support "merging" multiple tags, so just write them all out.
The conventions, quoted from reflect.StructTag:
By convention, tag strings are a concatenation of optionally space-separated key:"value" pairs. Each key is a non-empty string consisting of non-control characters other than space (U+0020 ' '), quote (U+0022 '"'), and colon (U+003A ':'). Each value is quoted using U+0022 '"' characters and Go string literal syntax.
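For example, a minimal sketch of reading such tags with the reflect package (reusing the struct from the question):

package main

import (
	"fmt"
	"reflect"
)

type MyStruct struct {
	MyField string `mapstructure:"myField" json:"myField" yaml:"myField"`
}

func main() {
	field, _ := reflect.TypeOf(MyStruct{}).FieldByName("MyField")

	// StructTag.Get returns the value for a key, or "" if the key is absent.
	fmt.Println(field.Tag.Get("json")) // myField

	// StructTag.Lookup also reports whether the key was present at all.
	if v, ok := field.Tag.Lookup("yaml"); ok {
		fmt.Println(v) // myField
	}
}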
See related question: What are the use(s) for tags in Go?

Exclude some characters in Unicode category

I'm trying to implement a rule along the lines of "all characters in the Letter and Symbol Unicode categories except a few reserved characters." From the lexer rules, I know I can use \p{___} to match against Unicode categories, but I am unsure of how to handle excluding certain characters.
Looking at example grammars, I am led in a few different directions. For example, the Java 9 grammar seems to use predicates in order to directly use Java's built-in isJavaIdentifier() while others manually define every valid character.
How can I achieve this functionality?
Without target-specific code, you will have to define the ranges yourself so that the chars you want to exclude are not part of these ranges. You cannot use \p{...} and then exclude certain characters from it.
With target-specific code, you can do as in the Java 9 grammar:
@lexer::members {
    // Target-specific helper: decide whether 'character' (the code point just
    // matched) is valid. At this point you're sure it is at least a char from
    // \p{Letter} or \p{Symbol}.
    boolean aCustomMethod(int character) {
        return true;
    }
}

TOKEN
    : [\p{Letter}\p{Symbol}] {aCustomMethod(_input.LA(-1))}?
    ;
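As a hedged sketch of what the helper body might look like (the reserved characters '$' and '+' are just placeholders):

boolean aCustomMethod(int character) {
    // Reject a couple of reserved code points; accept every other
    // character that already matched \p{Letter} or \p{Symbol}.
    return character != '$' && character != '+';
}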

XML schema restriction pattern for not allowing specific string

I need to write an XSD schema with a restriction on a field, to ensure that
the value of the field does not contain the substring FILENAME at any location.
For example, all of the following must be invalid:
FILENAME
ORIGINFILENAME
FILENAMETEST
123FILENAME456
None of these values should be valid.
In a regular expression language that supports negative lookahead, I could do this by writing /^((?!FILENAME).)*$/ but the XSD pattern language does not support negative lookahead.
How can I implement an XSD pattern restriction with the same effect as /^((?!FILENAME).)*$/ ?
I need to use pattern, because I don't have access to XSD 1.1 assertions, which are the other obvious possibility.
The question XSD restriction that negates a matching string covers a similar case, but in that case the forbidden string is forbidden only as a prefix, which makes checking the constraint easier. How can the solution there be extended to cover the case where we have to check all locations within the input string, and not just the beginning?
OK, the OP has persuaded me that while the other question mentioned has an overlapping topic, the fact that the forbidden string is forbidden at all locations, not just as a prefix, complicates things enough to require a separate answer, at least for the XSD 1.0 case. (I started to add this answer as an addendum to my answer to the other question, and it grew too large.)
There are two approaches one can use here.
First, in XSD 1.1, a simple assertion of the form
not(matches($v, 'FILENAME'))
ought to do the job.
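For instance, a sketch of how that assertion might be attached to a simple type (the type name is made up):

<xs:simpleType name="stringWithoutFILENAME">
  <xs:restriction base="xs:string">
    <xs:assertion test="not(matches($value, 'FILENAME'))"/>
  </xs:restriction>
</xs:simpleType>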
Second, if one is forced to work with an XSD 1.0 processor, one needs a pattern that will match all and only strings that don't contain the forbidden substring (here 'FILENAME').
One way to do this is to ensure that the character 'F' never occurs in the input. That's too drastic, but it does do the job: strings not containing the first character of the forbidden string do not contain the forbidden string.
But what of strings that do contain an occurrence of 'F'? They are fine, as long as no 'F' is followed by the string 'ILENAME'.
Putting that last point more abstractly, we can say that any acceptable string (any string that doesn't contain the string 'FILENAME') can be divided into two parts:
- a prefix which contains no occurrences of the character 'F'
- zero or more segments, each consisting of an 'F' followed by a string that doesn't begin with 'ILENAME' and doesn't contain any 'F'.
The prefix is easy to match: [^F]*.
The strings that start with F but don't match 'FILENAME' are a bit more complicated; just as we don't want to outlaw all occurrences of 'F', we also don't want to outlaw 'FI', 'FIL', etc. -- but each occurrence of such a dangerous string must be followed either by the end of the string, or by a letter that doesn't match the next letter of the forbidden string, or by another 'F' which begins another region we need to test. So for each proper prefix of the forbidden string, we create a regular expression of the form
$prefix || '([^F' || next-character-in-forbidden-string || '][^F]*)?'
Then we join all of those regular expressions with or-bars.
The end result in this case is something like the following (I have inserted newlines here and there, to make it easier to read; before use, they will need to be taken back out):
[^F]*
((F([^FI][^F]*)?)
|(FI([^FL][^F]*)?)
|(FIL([^FE][^F]*)?)
|(FILE([^FN][^F]*)?)
|(FILEN([^FA][^F]*)?)
|(FILENA([^FM][^F]*)?)
|(FILENAM([^FE][^F]*)?))*
Two points to bear in mind:
XSD regular expressions are implicitly anchored; testing this with a non-anchored regular expression evaluator will not produce the correct results.
It may not be obvious at first why the alternatives in the choice all end with [^F]* instead of .*. Thinking about the string 'FEEFIFILENAME' may help. We have to check every occurrence of 'F' to make sure it's not followed by 'ILENAME'.
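Wired into a schema, the (newline-free) pattern might look like this sketch (the type name is made up):

<xs:simpleType name="stringWithoutFILENAME">
  <xs:restriction base="xs:string">
    <xs:pattern value="[^F]*((F([^FI][^F]*)?)|(FI([^FL][^F]*)?)|(FIL([^FE][^F]*)?)|(FILE([^FN][^F]*)?)|(FILEN([^FA][^F]*)?)|(FILENA([^FM][^F]*)?)|(FILENAM([^FE][^F]*)?))*"/>
  </xs:restriction>
</xs:simpleType>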

Single-quote notation for characters in Coq?

In most programming languages, 'c' is a character and "c" is a string of length 1. But Coq (according to its standard ascii and string library) uses "c" as the notation for both, which requires constant use of Open Scope to clarify which one is being referred to. How can you avoid this and designate characters in the usual way, with single quotes? It would be nice if there is a solution that only partially overrides the standard library, changing the notation but recycling the rest.
Require Import Ascii.
Require Import String.
Check "a"%char.
Check "b"%string.
Or this:
Program Definition c (s : string) : ascii :=
  match s with "" => " "%char | String a _ => a end.
Check (c"A").
Check ("A").
I am quite confident that there is no smart way of doing this, but there is a somewhat annoying one: simply declare one notation for each character.
Notation "''c''" := "c" : char_scope.
Notation "''a''" := "a" : char_scope.
Check 'a'.
Check 'c'.
It shouldn't be too hard to write a script for automatically generating those declarations. I don't know if this has any negative side-effects on Coq's parser, though.

Allowing only usernames using "reasonable" characters

A username for a website can contain the space character, and yet it cannot be composed only of space characters. It can contain some symbols (like underscore and dash), but starting with certain symbols would look weird. Non-latin letters should be allowed, preferably for all languages, but tab and newline characters shouldn't. And definitely no Zalgo.
The rules for what should and shouldn't be allowed in a reasonable naming system are complicated; however, they are virtually the same for every website. Reimplementing them is probably a bad idea. Where can I find an implementation? I'm using PHP.
You should validate the username entered by the new user with a regular expression that checks it against an allowed character set.
Example: the following rejects any name containing a character other than English alphanumerics, -, _ and . :
// Returns true when $name contains no character outside the allowed set
// ($filter is a negated character class of the allowed characters).
function isNewUsernameValid($name, $filter = "[^a-zA-Z0-9\-\_\.]") {
    return preg_match("~" . $filter . "~iU", $name) ? false : true;
}

if (!isNewUsernameValid($name)) {
    print "Not a valid name.";
}
For your particular case, you'll have to come up with and test the regular expression.
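To get closer to the rules in the question (letters and digits from any script, plus space, underscore and dash, but not whitespace-only), a hedged sketch using PCRE Unicode properties might look like this (the function name and the exact character set are just one possible choice; combining marks are deliberately left out):

// Allow letters and digits from any script plus space, underscore and dash,
// and require at least one letter or digit so the name is not all spaces.
function isReasonableUsername($name) {
    if (!preg_match('~[\p{L}\p{N}]~u', $name)) {
        return false; // no letter or digit at all
    }
    // \z instead of $ so a trailing newline is not accepted.
    return (bool) preg_match('~^[\p{L}\p{N} _-]+\z~u', $name);
}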
