How to encode blob names that end with a period? - azure

Azure docs:
Avoid blob names that end with a dot (.), a forward slash (/), or a
sequence or combination of the two.
I cannot avoid such names due to legacy s3 compatibility and so I must encode them.
How should I encode such names?
I don't want to use base64 since that will make it very hard to debug when looking in azure's blob console.
Go has https://golang.org/pkg/net/url/#QueryEscape but it has this limitation:
Go's implementation of url.QueryEscape (specifically, the private shouldEscape function) escapes all characters except the following: alphabetic characters, decimal digits, '-', '_', '.', '~'.

I don't think there's any universal solution to handle this outside your application's scope. Within your application's scope you can do ANY encoding, so it falls to personal preference how you want your data laid out. There is no "right" way to do this.
Regardless, I believe you should aim for these properties:
- Conversion MUST be bidirectional and free of conflicts within your expected file-name space.
- DO keep file names without trailing dots unencoded.
- For dot-ending files, DO encode just the conflicting dots, keeping the rest of the name readable.
This keeps most (non-conflicting) file names short and intuitive, and should you ever be able to rename or phase out the conflicting files, you can just remove the conversion logic without restructuring all stored data and their URLs.
I'll suggest two approaches. Let's say you have these files:
/someParent/normal.txt
/someParent/extensionless
/someParent/single.
/someParent/double..
Use special subcontainers
You could strip the N trailing dots from the file name and translate them into a subcontainer named "dot", "dotdot", and so on.
The resulting URLs would look like:
/someParent/normal.txt
/someParent/extensionless
/someParent/dot/single
/someParent/dotdot/double
When reading, remove the "dot"*N folder level and append N dots back to the file name.
Obviously this assumes you don't ever need to have such "dot" folders as data themselves.
This is preferred if stored files can come in with any extension but you can make some assumptions on folder structure.
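For illustration, here's a minimal Python sketch of this subcontainer scheme (function names hypothetical; this is purely application-side path mangling, not an Azure API):

```python
def encode_name(path: str) -> str:
    """Move trailing dots of the file name into a 'dot'/'dotdot'/... subcontainer."""
    parent, _, name = path.rpartition("/")
    stripped = name.rstrip(".")
    n = len(name) - len(stripped)
    if n == 0:
        return path                      # non-conflicting names stay unencoded
    marker = "dot" * n                   # 1 dot -> "dot", 2 dots -> "dotdot", ...
    return f"{parent}/{marker}/{stripped}"

def decode_name(path: str) -> str:
    """Inverse: fold a 'dot...' folder level back into trailing dots."""
    parent, _, name = path.rpartition("/")
    grandparent, _, marker = parent.rpartition("/")
    if marker and len(marker) % 3 == 0 and marker == "dot" * (len(marker) // 3):
        return f"{grandparent}/{name}" + "." * (len(marker) // 3)
    return path
```

Note that a real folder literally named "dot" would collide with this scheme, which is exactly the assumption called out above.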
Use discardable artificial extension
Since the conflict is at the end, you could just append a never-used dummy extension to the affected files. For example "endswithdots", but you could choose something more suitable depending on what the expected extensions are:
/someParent/normal.txt
/someParent/extensionless
/someParent/single.endswithdots
/someParent/double..endswithdots
On reading, if the file extension is "endswithdots", remove that part from the end of the file name.
This is preferred if your data could have any container structure but you can make some assumptions on incoming extensions.
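A sketch of the same in Python (the sentinel "endswithdots" is just the example name used above):

```python
SENTINEL = "endswithdots"  # dummy extension assumed never to occur naturally

def encode_name(name: str) -> str:
    """Append the sentinel to names ending with a dot."""
    return name + SENTINEL if name.endswith(".") else name

def decode_name(name: str) -> str:
    """Strip the sentinel again, keeping the original trailing dot(s)."""
    return name[:-len(SENTINEL)] if name.endswith("." + SENTINEL) else name
```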
I would advise against Base64 or other whole-name encodings, as they make file names notably longer and lose any meaningful detail the names may contain.


How to redefine --top-level-division in pandoc for each constituent file separately?

I have a bunch of .md constituent content files marked up with headers #, ##, etc.
I want to flexibly compile new documents, with constituent files unchanged but residing at different levels of ToC hierarchy of the final document.
For example:
In compiled-1.pdf, top-level header # Foo from constituent-1.md might end up as "Chapter Foo" --- no change to its level in hierarchy.
However, in compiled-2.pdf, the very same # Foo from the very same constituent-1.md might end up as "Section Foo" --- a demotion to level 2 in the ToC hierarchy of compiled-2.pdf.
In each constituent .md file the top level header is always # and each constituent .md file is always treated as a whole indivisible unit. Therefore, all of a constituent file's headers are to be demoted by the same factor. Also, a constituent file's headers are never promoted.
I feel the problem has to do with re-setting --top-level-division for each file. How do I do it properly (using .yaml configs and make)?
But maybe a better way is to create, for each final document, a master file that establishes a hierarchy of constituent files with a combination of include('constituent-1.md') etc. and define('level', '1') etc. Such a master file would then be pre-processed with m4 to search and replace # with ## or ### etc., according to each file's level, and then piped to pandoc.
What's the best approach?
I think these are the right ideas, but not the right tools. Instead of using m4, you might want to check out pandoc filters, especially the built-in Lua filters or the excellent panflute python package. These allow you to manipulate the actual document structure instead of just the text representation.
E.g., this Lua filter demotes all headers in a document:
```lua
function Header (header)
  header.level = header.level + 1
  return header
end
```
Similarly, you could define your own include statement based on code blocks:
```{include="FILENAME.md"}
```
Include with this filter:
```lua
function CodeBlock (cb)
  if not cb.attributes.include then
    return
  end
  local fh = io.open(cb.attributes.include)
  local blocks = pandoc.read(fh:read('*a')).blocks
  fh:close()
  return blocks
end
```
It is also possible to apply a filter to only a subset of blocks (requires a little hack):
```lua
local blocks = …
local div = pandoc.Div(blocks)
local filtered_blocks = pandoc.walk_block(div, YOUR_FILTER).content
```
You can combine and extend these building blocks to write your own filter and define your own extensions. This way you can have a main document which includes all your sub-files and shifts header levels as necessary.
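If you do end up preprocessing at the text level as the question proposes, the m4 search-and-replace step can be sketched in plain Python instead (function name hypothetical); an AST-level filter like the Lua one above remains the more robust option:

```python
import re

def demote_headers(markdown: str, levels: int = 1) -> str:
    """Shift every ATX heading ('#', '##', ...) down by `levels`,
    capping at h6 -- a text-level analogue of the Lua Header filter."""
    return re.sub(r"(?m)^(#{1,6})(?=\s)",
                  lambda m: "#" * min(6, len(m.group(1)) + levels),
                  markdown)
```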

Recommended data structure to store a changeable sequence with a number

I am trying to build an FP-tree, and I'm quite confused about which data structure I should use to record each prefix path and its occurrence. A prefix path is a sequence recording an item set, like ('coffee','milk','bear'), and its occurrence is an int. I'll post two requirements of the data structure below so that you don't need to go deep into FP-trees:
- The occurrence of a prefix path needs to be looked up frequently, so maybe a dict like {prefix_path: occurrence} is the best way to store them.
- The prefix path needs to be updated (re-ranked and filtered) in a conditional FP-tree.
I have searched other people's work on GitHub and found that people use {tuple(['coffee','milk','bear']): occurrence} or {frozenset(['coffee','milk','bear']): occurrence}. However, when a prefix path is updated, they need to convert the tuple or frozenset into a list and then change it back. I don't think this is very Pythonic.
I am wondering if there is a better way to store prefix path with its occurrence.
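For reference, the tuple-keyed pattern described above looks roughly like this (data and names are just the question's example); the round trip through a list/generator exists because dict keys must stay hashable:

```python
# Occurrence counts keyed by immutable prefix paths.
counts = {("coffee", "milk", "bear"): 3}

def refilter(path: tuple, keep: set) -> tuple:
    """Filter a prefix path for a conditional FP-tree:
    tuple -> filtered items -> tuple, since a dict key can't be a list."""
    return tuple(item for item in path if item in keep)

old = ("coffee", "milk", "bear")
new = refilter(old, {"coffee", "milk"})
counts[new] = counts.pop(old)   # re-key under the updated path
```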

Possible to balance unidic vs. unidic-neologd?

With the sentence "場所は多少わかりづらいんですけど、感じのいいところでした。" (i.e. "It is a bit hard to find, but it is a nice place.") using mecab with -d mecab-unidic-neologd the first line of output is:
場所 バショ バショ 場所 名詞-固有名詞-人名-姓
I.e. it says "場所" is a person's surname. Using normal mecab-unidic, it more accurately says "場所" is just a common noun:
場所 バショ バショ 場所 名詞-普通名詞-一般
My first question is has unidic-neologd replaced all the entries in unidic, or has it simply appended its 3 million proper nouns?
Then, secondly, assuming it is a merger, is it possible to re-weight the entries to prefer plain unidic entries a bit more strongly? I.e. I'd love to get 中居正広のミになる図書館 and SMAP each recognized as a single proper noun, but I also need it to see that 場所 is always going to mean "place" (except when it is followed by a name suffix such as さん or 様, of course).
References: unidic-neologd
Neologd merges with unidic (or ipadic), which is the reason it keeps "unidic" in the name. If an entry has multiple parts of speech, like 場所, which entry to use is chosen by minimizing cost across the sentence using part-of-speech transitions and, for words in the dictionary, the per-token cost.
If you look in the CSV file that contains neologd dictionary entries you'll see two entries for 場所:
場所,4786,4786,4329,名詞,固有名詞,一般,*,*,*,バショ,場所,場所,バショ,場所,バショ,固,*,*,*,*
場所,4790,4790,4329,名詞,固有名詞,人名,姓,*,*,バショ,場所,場所,バショ,場所,バショ,固,*,*,*,*
And in lex.csv, the default unidic dictionary:
場所,5145,5145,4193,名詞,普通名詞,一般,*,*,*,バショ,場所,場所,バショ,場所,バショ,混,*,*,*,*
The fourth column is the cost. A lower-cost item is more likely to be selected, so in this case you can raise the cost of 場所 as a proper noun, though honestly I would just delete it. You can read more about fiddling with costs here (Japanese).
If you want to weight all default unidic entries more strongly, you can modify the neologd CSV file to increase all of its costs. This is one way to create such a file:
awk -F, 'BEGIN{OFS=FS}{$4 = $4 * 100; print $0}' neolog.csv > neolog.fix.csv
You will have to remove the original csv file before building (see Note 2 below).
In this particular case, I think you should report this as a bug to the Neologd project.
Note 1: As mentioned above, since which entry is selected depends on the sentence as a whole, it's possible to get the non-proper-noun tag even with the default configuration. Example sentence:
お店の場所知っている?
Note 2: The way the neologd dictionary combines with the default unidic dictionary is based on a subtle aspect of the way Mecab dictionary builds work. Specifically, all CSV files in a dictionary build directory are used when creating the system dictionary. Order isn't specified so it's unclear what happens in the case of collisions.
This feature is mentioned in the Mecab documentation here (Japanese).

Converting an ASTNode into code

How does one convert an ASTNode (or at least a CompilationUnit) into a valid piece of source code?
The documentation says that one shouldn't use toString, but doesn't mention any alternatives:
Returns a string representation of this node suitable for debugging purposes only.
CompilationUnits have rewrite, but that one does not work for ASTs created by hand.
Formatting options would be nice to have, but I'd basically be satisfied with anything that turns arbitrary ASTNodes into semantically equivalent source code.
In JDT the normal way for AST manipulation is to start with a basic CompilationUnit and then use a rewriter to add content. Then ASTRewriteAnalyzer / ASTRewriteFormatter should take care of creating formatted source code. Creating a CU just containing a stub type declaration shouldn't be hard, so that's one option.
If that doesn't suit your needs, you may want to experiment with directly calling the internal org.eclipse.jdt.internal.core.dom.rewrite.ASTRewriteFlattener.asString(ASTNode, RewriteEventStore). If you're not editing existing files, you can probably ignore the events collected in the RewriteEventStore and just use the returned String.

sas generate all possible misspellings

Does anyone know how to generate the possible misspellings of a word?
Example: unemployment
- uemployment
- onemploymnet
- etc.
If you just want to generate a list of possible misspellings, you might try a tool like this one. Otherwise, in SAS you might be able to use a function like COMPGED to compute a measure of the similarity between the string someone entered, and the one you wanted them to type. If the two are "close enough" by your standard, replace their text with the one you wanted.
Here is an example that computes the Generalized Edit Distance between "unemployment" and a variety of plausible misspellings.
data misspell;
  length misspell string $16.;
  input misspell $16.;
  retain string "unemployment";
  GED=compged(misspell, string, 'iL');
datalines;
nemployment
uemployment
unmployment
uneployment
unemloyment
unempoyment
unemplyment
unemploment
unemployent
unemploymnt
unemploymet
unemploymen
unemploymenyt
unemploymenty
unemploymenht
unemploymenth
unemploymengt
unemploymentg
unemploymenft
unemploymentf
blahblah
;
proc print data=misspell label;
label GED='Generalized Edit Distance';
var misspell string GED;
run;
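For comparison, a plain Levenshtein distance (COMPGED generalizes this with per-operation costs) can be computed in a few lines of Python:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insertions, deletions,
    substitutions) needed to turn `a` into `b`."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]
```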
Essentially you are trying to develop a list of text strings based on some rule of thumb: one letter is missing from the word, a letter is misplaced into the wrong spot, one letter was mistyped, etc. The problem is that these rules have to be explicitly defined before you can write the code, in SAS or any other language (this is what Chris was referring to). If your requirement is reduced to this one-wrong-letter scenario, then this might be manageable; otherwise, the commenters are correct and you can easily create massive lists of incorrect spellings (after all, every string except "unemployment" constitutes a misspelling of that word).
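Going the other direction (generating rather than scoring), the one-wrong-letter rules are easy to enumerate; a small Python sketch, assuming only deletions, transpositions, and lowercase substitutions:

```python
import string

def one_letter_misspellings(word: str) -> set:
    """All strings one deletion, transposition, or substitution away."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {l + r[1:] for l, r in splits if r}
    transposes = {l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1}
    substitutes = {l + c + r[1:] for l, r in splits if r
                   for c in string.ascii_lowercase if c != r[0]}
    return (deletes | transposes | substitutes) - {word}
```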
Having said that, there are many ways in SAS to accomplish this text manipulation (rx functions, some combination of other text-string functions, macros); however, there are probably better ways to accomplish this. I would suggest an external Perl process to generate a text file that can be read into SAS, but other programmers might have better alternatives.
If you are looking for a general spell checker, SAS does have proc spell.
It will take some tweaking to get it working for your situation; it's very old and clunky. It doesn't work well in this case, but you may have better results if you try another dictionary. A Google search will show other examples.
filename name temp lrecl=256;
options caps;
data _null_;
file name;
informat name $256.;
input name &;
put name;
cards;
uemployment
onemploymnet
;
proc spell in=name
dictionary=SASHELP.BASE.NAMES
suggest;
run;
options nocaps;
