I want to pass, in some shape or form, a dictionary/map/object into my clap app. I can preprocess the dict to turn it into CSV or something similar. My problem is that I cannot find in the clap docs which characters are valid for argument values and how to escape them. Is this unrelated to clap and instead shell-specific?
Can I pass something like
myApp --dicty="a=1,b=3,qwe=yxc"
?
Is this unrelated to clap and instead shell-specific?
Mostly, yes. clap is going to get whatever arguments the shell has determined and will parse those.
However, clap does have built-in support for value sets; from the readme:
Supports multiple values (i.e. -o <val1> -o <val2> or -o <val1> <val2>)
Supports delimited values (i.e. -o=val1,val2,val3, can also change the delimiter)
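For instance, a rough sketch of the delimited-values support just quoted, in the builder style used elsewhere in this thread (assuming a clap 2/3-era API; use_delimiter splits on , by default, and details vary slightly between versions):

use clap::{App, Arg};

fn main() {
    let matches = App::new("myApp")
        .arg(Arg::new("dicty")
            .long("dicty")
            .takes_value(true)
            .use_delimiter(true)) // splits the raw value on `,`
        .get_matches();

    // `myApp --dicty=a=1,b=3,qwe=yxc` yields ["a=1", "b=3", "qwe=yxc"].
    let values: Vec<&str> = matches.values_of("dicty").unwrap().collect();
    println!("{:?}", values);
}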
If that's not sufficient, you'll have to define dicty as a String; you will receive the string a=1,b=3,qwe=yxc (I don't think you'll receive the quotes) and you'll have to parse it yourself, either by hand (regex/split/...) or with something more advanced (e.g. the csv crate, though that's likely overkill).
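If you do parse it yourself, a minimal hand-rolled version might look like this (parse_dict is a hypothetical helper for illustration, not a clap API):

use std::collections::HashMap;

// Hypothetical helper: split `a=1,b=3,qwe=yxc` into key/value pairs.
fn parse_dict(raw: &str) -> Result<HashMap<String, String>, String> {
    raw.split(',')
        .map(|pair| match pair.split_once('=') {
            Some((k, v)) => Ok((k.to_string(), v.to_string())),
            None => Err(format!("invalid key=value pair: `{}`", pair)),
        })
        .collect()
}

fn main() {
    // Pretend this string came from `matches.value_of("dicty")`.
    let dict = parse_dict("a=1,b=3,qwe=yxc").unwrap();
    assert_eq!(dict["qwe"], "yxc");
}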
That seems like a somewhat odd option value though.
FWIW structopt (which builds upon clap to provide a more declarative interface, and should be part of Clap 3) doesn't exactly have support for that sort of thing, but it can be coerced into it relatively easily: https://github.com/TeXitoi/structopt/blob/master/examples/keyvalue.rs
With some modifications it would allow something like
myApp -D a=1 -D b=3 -D que=yxc
or (though see comments in linked snippet for limitations)
myApp -D a=1 b=3 que=yxc
to be collected as a vec![("a", "1"), ("b", "3"), ("que", "yxc")], from which creating a HashMap is trivial.
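That last step really is a one-liner; a minimal sketch:

use std::collections::HashMap;

fn main() {
    // Pairs as collected by a keyvalue-style parser like the linked example.
    let pairs = vec![("a", "1"), ("b", "3"), ("que", "yxc")];
    let dict: HashMap<_, _> = pairs.into_iter().collect();
    assert_eq!(dict["b"], "3");
}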
This is a follow-up question to How to extract nightly features used in a crate?
I also want to know where and how #![cfg_attr(target_os = "macos", feature(...))] is turned into #![feature(...)].
Could someone give me some tips about how rustc deals with cfg?
The rustc_expand crate is responsible for expanding macros. It special-cases #[cfg] and #[cfg_attr] when parsing attribute macros so that they aren't treated like normal macros.
rustc_expand calls rustc_parse::parse_cfg_attr to parse the attribute.
With the parsed macro, rustc_expand calls rustc_attr::cfg_matches to evaluate if the condition is met.
rustc_expand then either includes or doesn't include the attribute passed as an argument.
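As a small runnable illustration of what that evaluation does (using allow(dead_code) rather than a nightly feature gate, since feature attributes need a nightly compiler):

// On macOS this function gets `#[allow(dead_code)]` after expansion;
// on every other target the attribute disappears entirely.
#[cfg_attr(target_os = "macos", allow(dead_code))]
fn mac_only_helper() {}

fn main() {
    // `cfg!` is the expression form of the same condition evaluation.
    println!("compiled for macos: {}", cfg!(target_os = "macos"));
    mac_only_helper();
}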
I'm trying to figure out how to dynamically generate arguments from input arguments with Clap.
What I'm trying to emulate with Clap is the following python code:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-i", type=str, nargs="*")
(input_args, additional_args) = parser.parse_known_args()
for arg in input_args.i:
    parser.add_argument(f'--{arg}-bar', required=True, type=str)
additional_config = parser.parse_args(additional_args)
So that you can do the following in your command:
./foo.py -i foo bar baz --foo-bar foo --bar-bar bar --baz-bar bar
and have the additional arguments be dynamically generated from the first arguments. Not sure if it's possible to do in Clap but I assumed it was maybe possible due to the Readme stating you could use the builder pattern to dynamically generate arguments[1].
So here is my naive attempt at doing this.
use clap::{Arg, App};

fn main() {
    let mut app = App::new("foo")
        .arg(Arg::new("input")
            .short('i')
            .value_name("INPUT")
            .multiple(true)
            .required(true));
    let matches = app.get_matches_mut();
    let input: Vec<_> = matches.values_of("input").unwrap().collect();
    for i in input {
        app.arg(Arg::new(&*format!("{}-bar", i)).required(true))
    }
}
Which obviously does not compile; the compiler screams at you about both the format! lifetime and app.arg. I'm mostly interested in solving how I could generate new arguments for the app which could then be matched against again. I'm quite new to Rust, so it's quite possible this is not possible with Clap.
[1] https://github.com/clap-rs/clap
I assumed it was maybe possible due to the Readme stating you could use the builder pattern to dynamically generate arguments[1].
Dynamically generating arguments means that you can call .arg with runtime values and it'll work fine (i.e. the entire CLI doesn't need to be fully defined at compile time; this distinction doesn't exist in Python, where everything is done at runtime).
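For example (an illustrative sketch in the question's builder style), the argument definitions can come from data that only exists at runtime:

use clap::{App, Arg};

fn main() {
    // These names could just as well be read from a file or the environment.
    let names = vec!["alpha", "beta"];

    let mut app = App::new("demo");
    for name in &names {
        app = app.arg(Arg::new(*name).long(*name).takes_value(true));
    }

    let matches = app.get_matches();
    for name in &names {
        println!("{} = {:?}", name, matches.value_of(*name));
    }
}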
What you're doing here is significantly more complicated (and specialised, and odd) as you're passing through unknown parameters then re-parsing them.
Now first of all, you literally can't reuse an App in clap: most of its methods (very much including get_matches) take self and therefore "consume" the App and return something else, either the original App or a result. Though I guess you can clone the original App before you get_matches it.
But I don't think that's useful here: though I have not tried it, it should be possible to do what you want using TrailingVarArg: this would collect all trailing arguments into a single positional arg slice (you will probably need AllowLeadingHyphen as well), then you can create a second App with dynamically generated parameters in order to parse that subset of arguments (get_matches_from will parse from an iterator rather than the env args; this is useful for testing... or for this exact sort of situation).
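Untested, but a sketch of that two-pass approach against the clap 3 beta-era builder the question uses (the Box::leak trick is needed because clap borrows argument names for the 'help lifetime; names like "rest" here are illustrative):

use clap::{App, AppSettings, Arg};

fn main() {
    // First pass: collect `-i` values, and sweep everything after them
    // into a trailing catch-all positional.
    let first = App::new("foo")
        .setting(AppSettings::TrailingVarArg)
        .setting(AppSettings::AllowLeadingHyphen)
        .arg(Arg::new("input").short('i').multiple(true).required(true))
        .arg(Arg::new("rest").multiple(true))
        .get_matches();

    let inputs: Vec<String> = first
        .values_of("input")
        .unwrap()
        .map(String::from)
        .collect();
    let rest: Vec<String> = first
        .values_of("rest")
        .map(|vals| vals.map(String::from).collect())
        .unwrap_or_default();

    // Second pass: a fresh App whose arguments are derived from the first.
    let mut second = App::new("foo-dynamic");
    for name in &inputs {
        // Dynamically built names must outlive the App; leaking them is the
        // usual workaround for a one-shot CLI.
        let arg_name: &'static str =
            Box::leak(format!("{}-bar", name).into_boxed_str());
        second = second.arg(
            Arg::new(arg_name).long(arg_name).takes_value(true).required(true),
        );
    }

    // `get_matches_from` expects the binary name as the first element.
    let dynamic =
        second.get_matches_from(std::iter::once("foo-dynamic".to_string()).chain(rest));
    for name in &inputs {
        let key = format!("{}-bar", name);
        println!("{} = {:?}", key, dynamic.value_of(&*key));
    }
}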
It seems that the inline pass macro does not work for CLI attributes.
If I render the following snippet:
:foo: crazy
:bar: pass:q,a[{foo} *world*]
hello {bar}
I get what I expect: hello crazy world
But if I pass the two attributes to the CLI (asciidoctor-pdf -a foo=crazy -a bar='pass:q,a[{foo} *world*]' foo.adoc), it does not work:
hello pass:q,a[{foo} *world*]
What can I do to make it work?
To add a little context, I plan to use Antora to write the documentation of the software I am developing. I would like to define attributes in the antora-playbook.yml or the antora.yml to act as 'LaTeX macros'.
Attributes specified on the command line are treated as strings, not AsciiDoc markup. That means the pass macro isn't being processed.
However, by default, attributes specified on the command-line override attributes specified within a document. So, you can use your document as described above, with the attribute definitions included, and then you can run:
asciidoctor-pdf -a foo=stable foo.adoc
The command-line definition for the foo attribute overrides the in-document definition, and the result is hello stable world.
I'm trying to pass 2 mandatory arguments for a Node.js program I'm creating.
I'm using Yargs to do it like this:
const yarg = require("yargs")
.usage("hi")
.options("m", {demandOption: true})
.options("m2", {demandOption: true})
.argv;
This works, but with a slight issue. I want to activate the script like this:
node index.js -m val -m2 val2
It doesn't work; I get an error saying m2 is missing. Only when I add another - before m2 does it work, meaning I have to do it like this:
node index.js -m val1 --m2 val2
Is there a way to make it accept the args like I wanted to in the first place?
You can't do what you're asking for with yargs, and arguably you shouldn't. It's easy to see why you can't by looking at what yargs-parser (the module yargs uses to parse your arguments) returns for the different argument styles you mentioned:
const yargsParser = require('yargs-parser'); // e.g. yargs-parser 11.1.1

console.log(yargsParser(['-m', 'A']));
console.log(yargsParser(['-m2', 'B']));
console.log(yargsParser(['--m2', 'C']));
As you can see:
-m A is parsed as option m with value A.
-m2 B is parsed as option m with value 2 and, separately, the positional argument B.
--m2 C is parsed as option m2 with value C.
There is no way to make the parser behave differently, and with good reason: most command-line tools (Windows notwithstanding) behave this way.
By decades-old convention, "long" options use two dashes followed by a human-readable name, like --max-count. "Short" options, which are often aliases for long ones, use a single dash followed by a single character, like -m, and—this is the crucial bit—if the option takes a value, the space between the option and the value can be omitted. This is why when yargs sees -m2, it assumes 2 is the value for the m option.
If you want to use yargs (and not confuse command-line-savvy users), you'll need to change your options to either (1) --m and --m2, or (2) -m and (for example) -n.
Is there any way to have an Alex macro defined in one source file and used in other source files? In my case, I have definitions for $LowerCaseLetter and $UpperCaseLetter (these are all letters except e and O, since they have special roles in my code). How can I refer to these macros from other .x files?
Disproving that something exists is always harder than finding something that does exist, but I think the info below does show that Alex can only get macro definitions from the .x file it is reading (other than predefined stuff like $white), and not via includes from other files.
You can get the source code for Alex by doing the following:
> cabal unpack alex
> cd alex-3.1.3
In src/Main.hs, predefined macros are first set in variables called initSetEnv (charset macros $white, $printable, and "."), and initREEnv (regexp macros, there are none). This gets passed into runP, in src/ParseMonad.hs, which is used to hold the current parsing state, including all defined macros. The initial state is set using the values passed in, but macros can be added using a function called newSMac (or newRMac for regular expression macros).
Since this seems to be the only way that macros can be set, it is then only a matter of some grep bookkeeping to verify that the only way macros can be added is through an actual macro definition in the source .x file. Unsurprisingly, Alex recursively uses its own .x/.y files for .x source file parsing (src/parser.y, src/Scan.x). It is a couple of levels of indirection away, but you can verify that the only way newSMac can be called is through the src/Scan.x macro
#smac = \$ #id | \$ \{ #id \}
<0> #smac #ws? \= { smacdef }
Other than some obvious predefined stuff, I don't believe reuse in lexers is all that typical anyway, because at the token level things are usually pretty simple (often simple tokens like SPACE, WORD, NUMBER, and a few operators, symbols, and parens are all that are needed). The complexity comes at the parsing stage, although for technical reasons parser includes aren't that common either (see scannerless parsing for a newer technology that does allow reuse through nesting, like JavaScript embedded in HTML; the tools for scannerless parsing are still pretty primitive though).