I am currently working on a project where I serialize certain variants of an enum to prepare an HTTP request. I would like to know whether there is a way to identify which variants of the enum are never used in this process, so that I can remove them and keep my code clean and organized. The enum is not matched anywhere in the code. Is there a specific method or tool for this? Thank you for your time and help.
For example, all variants but "UseCase4" are in use:
#[derive(Clone, Debug, Serialize, Deserialize)]
pub enum UseCases {
    UseCase1,
    UseCase2,
    UseCase3,
    UseCase4,
}
One potential solution would be to find an IDE extension that can assist with this task. I am currently using VS Code and IntelliJ IDEA, so a solution that works with either of these IDEs would be ideal.
I could not find any information.
There is no way to do this automatically when using dynamic data. The best you can do is look at your input data, determine which values actually occur in it, and check manually. This is a limitation of any deserialization library in any language.
If the data is highly structured and has a specification, you may be able to write a small tool that scans the specification and extracts the possible values.
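As a rough sketch of that idea, assuming serde_json and the UseCases enum above, you could deserialize a batch of sample payloads and tally which variants actually occur:
use std::collections::HashSet;

// With serde's default representation, each unit variant of UseCases
// serializes as a plain JSON string such as "UseCase1".
fn observed_variants(samples: &[&str]) -> HashSet<String> {
    let mut seen = HashSet::new();
    for sample in samples {
        if let Ok(case) = serde_json::from_str::<UseCases>(sample) {
            seen.insert(format!("{:?}", case));
        }
    }
    seen
}
Any variant that never shows up in the result across a representative data set is a candidate for removal, though the check is only as good as the sample data.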
Good luck.
Related
Pest.rs makes it very easy to write grammars, but my current grammar in complex_conf.pest is 183 lines and growing. Is there an easy way to break this up such that the usual derive attributes shown below,
#[derive(Parser)]
#[grammar = "complex_conf.pest"]
pub struct ConfParser;
can still be made to work?
I hesitate to answer, as I'm not the person to speak for the community here.
But it seems this is a known pain point which they're planning to fix. The issue was opened in 2018.
Is there any reason for me to use the Hash abstraction of sr-primitives instead of using the substrate_primitives::hash and substrate_primitives::hashing modules?
It's just that it seems much easier to include H256 in my code (and use the corresponding hashing functions) than to use the Hash trait.
Substrate is built to be generic and highly customizable. When you write your modules and runtime logic around the Hash trait, you gain the benefits of your module being generic over the specific type of Hash being used in the runtime.
In this case, you do not need to depend on a specific type like H256 in your runtime. Instead, you can write runtime logic that depends only on the properties of a trait. That means that if, at some later point, you wanted to switch to a different hash function that produces a different Hash type, you would not have to rewrite any code.
Additionally, if you plan on sharing the modules you develop with others, you will want to keep your module as generic as possible, so as not to force the end blockchain developer to conform to your choices.
These abstractions do add some complexity, and they are not strictly needed to make things work. However, they are best practice, and something you might find pays dividends in the long term.
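As a rough illustration of the difference, here is a sketch built around a hypothetical Hashing trait (a stand-in, not the actual Substrate API): logic written against the trait keeps working no matter which concrete output type the runtime chooses.
// Hypothetical stand-in for the runtime's hash configuration.
pub trait Hashing {
    type Output: Clone + Eq + std::fmt::Debug;
    fn hash(data: &[u8]) -> Self::Output;
}

// Generic over the trait: switching to a different hash function later
// only means supplying a different `H`, not rewriting this logic.
pub fn storage_key<H: Hashing>(module: &[u8], item: &[u8]) -> H::Output {
    let mut buf = Vec::with_capacity(module.len() + item.len());
    buf.extend_from_slice(module);
    buf.extend_from_slice(item);
    H::hash(&buf)
}

// A version written directly against a concrete type such as H256 would
// have to change whenever the runtime's hash output type changes.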
The lexer/parser file located here is quite large and I'm not sure if it is suitable for just retrieving a list of Rust functions. Perhaps writing my own/using another library would be a better route to take?
The end objective would be to create a kind of execution manager. To give some context, it would read a list of function calls wrapped inside a function; those calls could then be reordered from a web interface. I thought it might be a nice way to manage larger applications.
No. I mean, not really. Whether you write your own parser or re-use syntex, you're going to hit a fundamental limitation: macros.
So let's say you go all-out and expand macro_rules!-based macros, including the ones defined in external crates (which means you'll also need to extract rustc's crate metadata loading... which isn't stable). What about procedural macros and custom derive attributes? Those are defined in code and depend on compiler-internal interfaces to function.
The only way this is likely to ever work correctly is if you build on top of the compiler, or duplicate a huge amount of work (which also involves unstable binary interfaces).
You could use syntex to parse the Rust code in a build script.
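For the "list the functions" part on its own, a small sketch using the syn crate (the maintained successor in this space; the calls below follow its 1.x/2.x API, so treat the details as an assumption) looks roughly like this:
use std::fs;

// Collect the names of top-level `fn` items in a source file.
// Anything generated by macros will not appear, per the limitation above.
fn list_functions(path: &str) -> Vec<String> {
    let source = fs::read_to_string(path).expect("cannot read file");
    let file = syn::parse_file(&source).expect("cannot parse file");
    file.items
        .iter()
        .filter_map(|item| match item {
            syn::Item::Fn(f) => Some(f.sig.ident.to_string()),
            _ => None,
        })
        .collect()
}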
This is the cousin of this question over here asking the same thing for C.
Basically, is there a better way than to just turn it into a giant byte array and putting it in a source file?
Alternatively, does a macro have the ability to do this? (Rust macros look dense to me, and I don't know their exact capabilities.)
You probably want include_bytes!.
If you are on an older version of Rust, use include_bin! instead.
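For example (the path is illustrative and is resolved relative to the current source file):
// The file's contents become part of the compiled binary as a
// &'static [u8; N], which coerces to a byte slice.
static ICON: &[u8] = include_bytes!("../assets/icon.png");

fn main() {
    println!("embedded {} bytes", ICON.len());
}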
Alternatively, you could use this tool, https://github.com/pyros2097/rust-embed, which I created; it generates Rust code for your resources.
Is there a way to get a summary of the instantiated templates (with what types and how many times - like a histogram) within a translation unit or for the whole project (shared object/executable)?
If I have a large codebase and I want to take advantage of C++11's extern template declarations, I would like to know which templates are instantiated most often within my project (or from the internals of the STL, like std::less<MyString> for example).
Also is it possible to have a weight assigned to each template instantiation (time spent by the compiler)?
Even if only one (C++11-enabled) compiler gives me such statistics, I would be happy.
How difficult would it be to implement such a thing with Clang's LibTooling?
And is this even reasonable? Many people have told me that I can work out which template instantiations I should extern without the use of a tool...
There are several ways to attack this problem.
If you are working with an open-source compiler, it's not hard to make a simple change to the source code that will trace all template instantiations.
If that sounds like too much hassle, you can also try to force the compiler to produce a warning on each template instantiation for a given symbol. Steven Watanabe has written a set of tools that can help you with that.
Finally, possibly the best option is to use the debugging symbols (or map files) generated by the compiler to track down how many times each function appears in the final image and, more importantly, how much it adds to the size in bytes. The best example of such a tool is Adrian Stone's SymbolSort, which is based on Microsoft's toolset. Another similar tool is the Map File Browser.