How to use an MFC resource ID stored as a string

I am developing an MFC application with menus defined in the .rc file. I have a requirement to remove a few menu items at run time, and those items are listed in an XML file.
The menu IDs are stored as strings in the XML, like this:
<exclusionmenu>ID_FILE_NEW</exclusionmenu>
<exclusionmenu>ID_FILE_OPEN</exclusionmenu>
The menu IDs are retrieved from the XML as strings, but the RemoveMenu function expects a UINT menu ID.
How do I convert a menu ID string from the XML to the UINT menu ID?
Note: This is not a direct CString-to-UINT conversion; ID_FILE_NEW is a macro, and it has an int value.

The symbolic names for resource identifiers are defined in a header file, Resource.h by default. In source code and resource scripts, the preprocessor replaces the symbolic names with their respective numeric values. By the time compilation begins, the symbolic information is already gone.
To implement a scheme that uses symbolic names for configuration, you have to extract and preserve the mapping between symbolic names and resource identifiers for later use at runtime, or apply the mapping to your configuration files prior to deployment. The following is a list of potential options:
Use an associative container and populate it at application startup: An appropriate container would be std::map<std::string, unsigned int>. Populating this container is conveniently performed using C++11's list initialization feature:
static std::map<std::string, unsigned int> IdMap = {
    { "ID_FILE_NEW", ID_FILE_NEW },
    { "ID_FILE_OPEN", ID_FILE_OPEN },
    // ...
};
At runtime you can use this container to retrieve a resource identifier given its symbolic name:
unsigned int GetId(const std::string& name) {
    auto it = IdMap.find(name);
    if (it == IdMap.end())
        throw std::runtime_error("Unknown resource identifier.");
    return it->second;
}
The downside to this approach is that you have to keep IdMap and the resources in sync. Whenever a resource is added, modified, or removed, the container contents must be updated to account for the changes made.
Parse Resource.h and store the mapping: The header file containing the symbolic identifier names has a fairly simple structure. Code lines that define a symbolic constant usually have the following layout:
\s* '#' \s* 'define' \s+ <name> \s+ <value> <comment>?
A parser to extract the mappings is not as difficult to implement as it may appear, and should be run at an appropriate time in the build process. Once the mapping has been extracted, it can be stored in a file of arbitrary format, for example an INI file. This file can either be deployed alongside the application, or compiled into the binary image as a resource. At application startup the contents are read back, and used to construct a mapping as described in the previous paragraph. In contrast to the previous solution, parsing the Resource.h file does not require manually updating the code when resources change.
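As an illustration, here is a minimal sketch of such a build step. Go is used here only for brevity; since this runs as a standalone tool during the build, any language will do, and the Resource.h path and the name=value output format are assumptions:
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches lines like "#define ID_FILE_NEW 0xE100" (decimal values work too).
var defineRe = regexp.MustCompile(`^\s*#\s*define\s+(\w+)\s+(0[xX][0-9a-fA-F]+|\d+)`)

func main() {
	f, err := os.Open("Resource.h")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		if m := defineRe.FindStringSubmatch(scanner.Text()); m != nil {
			// Emit one INI-style "name=value" line per identifier.
			fmt.Printf("%s=%s\n", m[1], m[2])
		}
	}
}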
Parse Resource.h and transform the configuration XML file: Like the previous solution, this option also requires parsing the Resource.h file. Using this information, the configuration XML file can then be transformed, replacing the symbolic names with their numeric counterparts prior to deployment. This, too, requires additional work. Once this is done, though, the process can be automated, and the results verified to maintain consistency. At runtime you can simply read the XML and have the numeric identifiers readily available.
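A matching sketch of the substitution step, under the same assumptions (the mapping file produced by the previous sketch, plus hypothetical XML file names); note that strings.NewReplacer substitutes in argument order, so identifiers that are prefixes of other identifiers deserve care:
package main

import (
	"os"
	"strings"
)

func main() {
	// Mapping produced by the Resource.h parser above, e.g. "ID_FILE_NEW=0xE100".
	mapping, err := os.ReadFile("resource-ids.ini")
	if err != nil {
		panic(err)
	}
	var pairs []string
	for _, line := range strings.Split(string(mapping), "\n") {
		if name, value, ok := strings.Cut(line, "="); ok {
			pairs = append(pairs, name, value)
		}
	}

	// Replace every symbolic name in the configuration XML with its numeric value.
	xml, err := os.ReadFile("exclusions.xml")
	if err != nil {
		panic(err)
	}
	out := strings.NewReplacer(pairs...).Replace(string(xml))
	if err := os.WriteFile("exclusions.deploy.xml", []byte(out), 0644); err != nil {
		panic(err)
	}
}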

The only way your scenario can work is if you distribute Resource.h with your application and have logic to parse Resource.h at startup into a table containing the ID_* names and their values.

You can't: the string form is 'lost' at compile time, since ID_FILE_NEW is just a preprocessor token. What you can do is store the string forms of the menu IDs yourself: somewhere in your code, create a std::map and fill it with values:
menu_ids["ID_FILE_NEW"] = ID_FILE_NEW;
Then you call:
RemoveMenu(menu_ids[string_from_xml], MF_BYCOMMAND);

Can you set multiple (different) tags with the same value?

For some of my projects, I have had to use the viper package for configuration.
The package requires you to add the mapstructure:"fieldname" tag so that your configuration object's fields are identified and set correctly, but I have also had to add other tags for other purposes, leading to something like the following:
type MyStruct struct {
    MyField string `mapstructure:"myField" json:"myField" yaml:"myField"`
}
As you can see, it is quite redundant to write tag:"myField" for each of my tags, so I was wondering if there was any way to "bundle" them up and reduce the verbosity, with something like mapstructure,json,yaml:"myField".
Or is it simply not possible, and you must specify every tag separately?
Struct tags are arbitrary string literals. Data stored in struct tags may look like whatever you want it to, but if you don't follow the conventions, you'll have to write your own parser / processing logic. If you follow the conventions, you may use StructTag.Get() and StructTag.Lookup() to easily get tag values.
The conventions do not support "merging" multiple tags, so just write them all out.
The conventions, quoted from reflect.StructTag:
By convention, tag strings are a concatenation of optionally space-separated key:"value" pairs. Each key is a non-empty string consisting of non-control characters other than space (U+0020 ' '), quote (U+0022 '"'), and colon (U+003A ':'). Each value is quoted using U+0022 '"' characters and Go string literal syntax.
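For illustration, here is a minimal runnable sketch, using the struct from the question, that reads each tag value back; every key is written out in full and queried separately:
package main

import (
	"fmt"
	"reflect"
)

type MyStruct struct {
	MyField string `mapstructure:"myField" json:"myField" yaml:"myField"`
}

func main() {
	tag := reflect.TypeOf(MyStruct{}).Field(0).Tag
	// Each conventional key must be looked up on its own; there is no combined form.
	fmt.Println(tag.Get("mapstructure")) // myField
	fmt.Println(tag.Get("json"))         // myField
	if v, ok := tag.Lookup("yaml"); ok {
		fmt.Println(v) // myField
	}
}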
See related question: What are the use(s) for tags in Go?

How to redefine --top-level-division in pandoc for each constituent file separately?

I have a bunch of .md constituent content files marked up with headers #, ##, etc.
I want to flexibly compile new documents, with constituent files unchanged but residing at different levels of the ToC hierarchy of the final document.
For example:
In compiled-1.pdf, top-level header # Foo from constituent-1.md might end up as "Chapter Foo" --- no change to its level in hierarchy.
However, in compiled-2.pdf, the very same # Foo from the very same constituent-1.md might end up as "Section Foo" --- a demotion to level 2 in the ToC hierarchy of compiled-2.pdf.
In each constituent .md file the top level header is always # and each constituent .md file is always treated as a whole indivisible unit. Therefore, all of a constituent file's headers are to be demoted by the same factor. Also, a constituent file's headers are never promoted.
I feel the problem has to do with re-setting --top-level-division for each file. How do I do that properly (using .yaml configs and make)?
But maybe a better way is to create for each final document a master file that establishes the hierarchy of constituent files with a combination of include(`constituent-1.md') etc. and define(`level', `1') etc. Such a master file would then be pre-processed with m4 to search and replace # with ## or ### etc., according to each file's level, and then piped to pandoc.
What's the best approach?
I think these are the right ideas, but not the right tools. Instead of using m4, you might want to check out pandoc filters, especially the built-in Lua filters or the excellent panflute Python package. These allow you to manipulate the actual document structure instead of just its text representation.
E.g., this Lua filter demotes all headers in a document:
function Header (header)
  header.level = header.level + 1
  return header
end
Similarly, you could define your own include statement based on code blocks:
```{include="FILENAME.md"}
```
Include with this filter:
function CodeBlock (cb)
  if not cb.attributes.include then
    return
  end
  local fh = io.open(cb.attributes.include)
  local blocks = pandoc.read(fh:read('*a')).blocks
  fh:close()
  return blocks
end
It is also possible to apply a filter to only a subset of blocks (requires a little hack):
local blocks = …
local div = pandoc.Div(blocks)
local filtered_blocks = pandoc.walk_block(div, YOUR_FILTER).content
You can combine and extend these building blocks to write your own filter and define your own extensions. This way, you can have a main document which includes all your sub-files and shifts header levels as necessary.

How to encode blob names that end with a period?

Azure docs:
Avoid blob names that end with a dot (.), a forward slash (/), or a sequence or combination of the two.
I cannot avoid such names due to legacy S3 compatibility, and so I must encode them.
How should I encode such names?
I don't want to use Base64, since that will make it very hard to debug when looking in Azure's blob console.
Go has https://golang.org/pkg/net/url/#QueryEscape, but it has this limitation:
Go's implementation of url.QueryEscape (specifically, the shouldEscape private function) escapes all characters except the following: alphabetic characters, decimal digits, '-', '_', '.', '~'.
I don't think there's any universal solution to handle this outside your application scope. Within your application scope, you can do ANY encoding, so it falls to personal preference how you like your data to be laid out. There is no "right" way to do this.
Regardless, I believe you should go for these properties:
Conversion MUST be bidirectional and conflict-free within your expected file name space.
DO keep file names without trailing dots unencoded.
For dot-ending files, DO encode just the conflicting dots, keeping the original name readable.
This keeps most (non-conflicting) file names short, with their original intuitive or hopefully meaningful names, and should you ever be able to rename or phase out the conflicting files, you can just remove the conversion logic without restructuring all stored data and their URLs.
I'll suggest two schemes for this. Let's say you have these files:
/someParent/normal.txt
/someParent/extensionless
/someParent/single.
/someParent/double..
Use special subcontainers
You could remove the N dots from the end of the file name and translate them into a subcontainer named "dot", "dotdot", etc.
The resulting URLs would look like:
/someParent/normal.txt
/someParent/extensionless
/someParent/dot/single
/someParent/dotdot/double
When reading, you can remove the "dot"*N folder level and append the N dots back to the file name.
Obviously this assumes you don't ever need to have such "dot" folders as data themselves.
This is preferred if stored files can come in with any extension but you can make some assumptions on folder structure.
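A minimal sketch of this scheme (in Go, since the question mentions it; the encode/decode helpers are mine, not an Azure SDK API, and paths are treated as plain strings):
package main

import (
	"fmt"
	"strings"
)

// encode moves N trailing dots of the file name into a "dot"*N folder level,
// e.g. "/someParent/double.." -> "/someParent/dotdot/double".
func encode(blobPath string) string {
	trimmed := strings.TrimRight(blobPath, ".")
	n := len(blobPath) - len(trimmed)
	if n == 0 {
		return blobPath
	}
	i := strings.LastIndex(trimmed, "/")
	return trimmed[:i+1] + strings.Repeat("dot", n) + "/" + trimmed[i+1:]
}

// decode reverses the scheme: a parent folder consisting only of "dot"
// repetitions is removed and the dots are appended to the file name.
func decode(blobPath string) string {
	parts := strings.Split(blobPath, "/")
	if len(parts) < 2 {
		return blobPath
	}
	folder := parts[len(parts)-2]
	n := len(folder) / 3
	if n == 0 || folder != strings.Repeat("dot", n) {
		return blobPath
	}
	name := parts[len(parts)-1] + strings.Repeat(".", n)
	return strings.Join(append(parts[:len(parts)-2], name), "/")
}

func main() {
	for _, p := range []string{"/someParent/normal.txt", "/someParent/single.", "/someParent/double.."} {
		e := encode(p)
		fmt.Println(p, "->", e, "->", decode(e))
	}
}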
Use a discardable artificial extension
Since the conflict is at the end, you could just append a never-used dummy extension to the affected files, for example "endswithdots"; you could choose something more suitable depending on what the expected extensions are:
/someParent/normal.txt
/someParent/extensionless
/someParent/single.endswithdots
/someParent/double..endswithdots
When reading, if the file extension is "endswithdots", you remove that part from the end of the file name.
This is preferred if your data could have any container structure but you can make some assumptions on incoming extensions.
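And a minimal sketch of the dummy-extension scheme, with the same caveats:
package main

import (
	"fmt"
	"strings"
)

// encode appends the dummy extension to names ending with a dot,
// e.g. "double.." -> "double..endswithdots"; other names pass through.
func encode(name string) string {
	if strings.HasSuffix(name, ".") {
		return name + "endswithdots"
	}
	return name
}

// decode strips the dummy extension again, restoring the trailing dots.
func decode(name string) string {
	if strings.HasSuffix(name, ".endswithdots") {
		return strings.TrimSuffix(name, "endswithdots")
	}
	return name
}

func main() {
	for _, n := range []string{"normal.txt", "extensionless", "single.", "double.."} {
		e := encode(n)
		fmt.Println(n, "->", e, "->", decode(e))
	}
}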
I would suggest against Base64 or other full-name encodings, as they would make file names notably longer and lose any meaningful details the file names may contain.

U-SQL Error - Change the identifier to use at least one lower case letter

I am fairly new to U-SQL and am trying to run a U-SQL script in Azure Data Lake Analytics to process a Parquet file using the Parquet extractor functionality. I am getting the below error, and I can't find a way around it.
Error - Change the identifier to use at least one lower case letter. If that is not possible, then escape that identifier (for example: '[ACTIVITY]'), or embed it in a CSHARP() block (e.g CSHARP(ACTIVITY)).
Unfortunately, all the fields generated in the Parquet file are capitalized, and I don't want to escape these identifiers. I also tried wrapping the identifier in a CSHARP block, and that fails as well (E_CSC_USER_RESERVEDKEYWORDASIDENTIFIER: Reserved keyword CSHARP is used as an identifier.) Is there any way I can extract the Parquet file? Thanks for your help!
Code Snippet:
SET @@FeaturePreviews = "EnableParquetUdos:on";

@var1 =
    EXTRACT ACTIVITY string,
            AUTHOR_NAME string,
            AFFLIATION string
    FROM "adl://xxx.azuredatalakestore.net/Abstracts/FY2018_028"
    USING Extractors.Parquet();

@var2 =
    SELECT *
    FROM @var1
    ORDER BY ACTIVITY ASC
    FETCH 5 ROWS;

OUTPUT @var2
TO "adl://xxx.azuredatalakestore.net/Results/AbstractsResults.csv"
USING Outputters.Csv();
Based on your description, you are trying to write:
EXTRACT ALLCAPSNAME int FROM "/data.parquet" USING Extractors.Parquet();
In U-SQL, we reserve all caps identifiers so we can add new keywords in the future without invalidating old scripts.
To work around, you just have to quote the name (escape it) like in any other SQL dialect:
EXTRACT [ALLCAPSNAME] int FROM "/data.parquet" USING Extractors.Parquet();
Note that this is not changing the name of the field. It is just the syntactic way to address the field.
Also note, that in most SQL communities, it is considered a best practice to always quote identifiers to avoid reserved keyword clashes.
If all fields in the Parquet file are all caps, you will have to quote them all... In a future update you will be able to say EXTRACT * FROM … for Parquet (and ORC) files, but you will still need to quote the columns when you refer to them explicitly.

How to get the filename and line number of a particular JetBrains.ReSharper.Psi.IDeclaredElement?

I want to write a test framework extension for resharper. The docs for this are here: http://confluence.jetbrains.net/display/ReSharper/Test+Framework+Support
One aspect of this is indicating whether a particular piece of code is part of a test. The piece of code is represented as an IDeclaredElement.
Is it possible to get the filename and line number of a piece of code represented by a particular IDeclaredElement?
Following up on the response below:
@Evgeny, thanks for the answer, I wonder if you can clarify one point for me.
Suppose the user has this test open in visual studio: https://github.com/fschwiet/DreamNJasmine/blob/master/NJasmine.Tests/SampleTest.cs
Suppose the user right clicks on line 48, the "player.Resume()" expression.
Will the IDeclaredElement tell me specifically that they want to run line 48? Or is it going to give me an IDeclaredElement corresponding to the entire class, and a filename/line number range for the entire class?
I should play with this myself, but I appreciate tapping into what you already know.
Yes.
The "IDeclaredElement" entity is the code symbol (class, method, variable, etc.). It could be loaded from assembly metadata, it could be declared in source code, it could come from source code implicitly.
You can use
var declarations = declaredElement.GetDeclarations();
to get all the AST elements which declare it (this could return multiple declarations for a partial class, for example).
Then, for any IDeclaration, you can use
var documentRange = declaration.GetDocumentRange();
if (documentRange.IsValid())
    Console.WriteLine("File: {0} Line: {1}",
        DocumentManager.GetInstance(declaration.GetSolution()).GetProjectFile(documentRange.Document).Name,
        documentRange.Document.GetCoordsByOffset(documentRange.TextRange.StartOffset).Line);
By the way, which test framework extension are you developing?
Will the IDeclaredElement tell me specifically they want to run at line 48?
Once more: IDeclaredElement has no position in the file. Instead, its declarations have them.
For every declaration (IDeclaration is a regular AST node) there is a range in the document which covers it. In my previous example, I used TextRange.StartOffset, though you can use TextRange.EndOffset as well.
If you need a more precise position in the file, traverse the AST and check the coordinates in the document for the specific expression or statement.
