With the sentence "場所は多少わかりづらいんですけど、感じのいいところでした。" (i.e. "It is a bit hard to find, but it is a nice place."), running mecab with -d mecab-unidic-neologd gives this as the first line of output:
場所 バショ バショ 場所 名詞-固有名詞-人名-姓
That is, it says "場所" is a person's surname. Using normal mecab-unidic, it more accurately says that "場所" is just a common noun:
場所 バショ バショ 場所 名詞-普通名詞-一般
My first question is: has unidic-neologd replaced all the entries in unidic, or has it simply appended its 3 million proper nouns?
Secondly, assuming it is a merger, is it possible to re-weight the entries to prefer the plain unidic entries a bit more strongly? That is, I'd love to get 中居正広のミになる図書館 and SMAP each recognized as a single proper noun, but I also need it to see that 場所 is always going to mean "place" (except when it is followed by a name suffix such as さん or 様, of course).
References: unidic-neologd
Neologd merges with unidic (or ipadic), which is why it keeps "unidic" in the name. If an entry has multiple parts of speech, like 場所, the entry to use is chosen by minimizing cost across the sentence, using part-of-speech transition costs plus, for words in the dictionary, the per-token cost.
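As a rough illustration of that scoring (a conceptual sketch only, not MeCab's real API; the ids and costs below are made up):

from collections import namedtuple

# each dictionary entry carries left/right context ids and a word cost
Token = namedtuple('Token', 'surface left_id right_id cost')

def path_cost(tokens, conn_cost, bos_eos_id=0):
    """Sum of per-token costs plus connection costs between neighbours.
    conn_cost[(right_id, left_id)] plays the role of matrix.def."""
    total, prev_right = 0, bos_eos_id
    for t in tokens:
        total += conn_cost[(prev_right, t.left_id)] + t.cost
        prev_right = t.right_id
    return total + conn_cost[(prev_right, bos_eos_id)]

# toy example: a two-token path
tokens = [Token('場所', 5145, 5145, 4193), Token('は', 1, 1, 300)]
conn = {(0, 5145): 100, (5145, 1): 50, (1, 0): 10}
print(path_cost(tokens, conn))  # 4653

MeCab picks the segmentation whose total path cost is lowest, which is why changing a per-token cost shifts which 場所 entry wins.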
If you look in the CSV file that contains neologd dictionary entries you'll see two entries for 場所:
場所,4786,4786,4329,名詞,固有名詞,一般,*,*,*,バショ,場所,場所,バショ,場所,バショ,固,*,*,*,*
場所,4790,4790,4329,名詞,固有名詞,人名,姓,*,*,バショ,場所,場所,バショ,場所,バショ,固,*,*,*,*
And in lex.csv, the default unidic dictionary:
場所,5145,5145,4193,名詞,普通名詞,一般,*,*,*,バショ,場所,場所,バショ,場所,バショ,混,*,*,*,*
The fourth column is the cost. A lower-cost item is more likely to be selected, so in this case you can raise the cost for 場所 as a proper noun, though honestly I would just delete it. You can read more about fiddling with costs here (Japanese).
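If you want to script that deletion, a minimal sketch (the file names are assumptions; the column layout matches the entries above):

import csv

with open('neolog.csv', newline='', encoding='utf-8') as src, \
     open('neolog.fix.csv', 'w', newline='', encoding='utf-8') as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        # columns: surface, left-id, right-id, cost, POS1, POS2, POS3, POS4, ...
        if row[0] == '場所' and row[7] == '姓':
            continue  # drop the surname entry outright
        writer.writerow(row)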
If you want to weight all default unidic entries more strongly, you can modify the neologd CSV file to increase the cost of every entry it contains. This is one way to create a file like that:
# multiply the per-token cost (4th column) by 100, making every neologd entry far less likely to win
awk -F, 'BEGIN{OFS=FS}{$4 = $4 * 100; print $0}' neolog.csv > neolog.fix.csv
You will have to remove the original csv file before building (see Note 2 below).
In this particular case, I think you should report this as a bug to the Neologd project.
Note 1: As mentioned above, since which entry is selected depends on the sentence as a whole, it's possible to get the non-proper-noun tag even with the default configuration. Example sentence:
お店の場所知っている?
Note 2: The way the neologd dictionary combines with the default unidic dictionary is based on a subtle aspect of the way Mecab dictionary builds work. Specifically, all CSV files in a dictionary build directory are used when creating the system dictionary. Order isn't specified so it's unclear what happens in the case of collisions.
This feature is mentioned in the Mecab documentation here (Japanese).
I have different commands my program is reading in (e.g., print, count, min, max, etc.). These words can also include a number at the end (e.g., print3, count1, min2, max6). I'm trying to figure out a way to extract the command and the number so that I can use both in my code.
I'm struggling to figure out a way to find the last element of the string in order to extract it, in Smalltalk.
You didn't say which incarnation of Smalltalk you use, so I will explain what I would do in Pharo, which is the one I'm familiar with.
As someone who has been playing with Pharo for a few months at most, I can tell you the sheer number of classes and methods available can feel overwhelming at first, but the environment actually makes it easy to find things. For example, when you know the exact input and output you want but don't know whether a method already exists somewhere, or what its name is, the Finder lets you search by giving an example. You can open it from the World menu, as shown below:
By default it seeks selectors (method names) matching your input terms:
But this default is not what we need right now, so you must change the option in the upper-right box to "Examples", and type into the search field an example of the input, followed by the output you want, separated by a ".". The input example I used was the string 'max6', followed by the desired result, the number 6. Pharo then gives me a list of methods that match:
To find what would return us the text part, you can run a new search, changing the example output from the number 6 to the string 'max':
Fortunately, there are several built-in methods matching the description of your problem.
There are more elegant ways, I suppose, but you can make use of the fact that String>>#asNumber only parses the part it can recognize. So you can do
'print31' reversed asNumber asString reversed asNumber
to give you 31. That only works if there actually is a number at the end.
This is one of those cases where we can presume the input data has a specific form, i.e., the only numbers appear at the end of the string, and you want all of those numbers. In that case it's not too hard to do, really:
numText := 'Kalahari78' select: [ :each | each isDigit ].
num := numText asInteger. "78"
To get the rest of the string without the digits, you can just use this:
'Kalahari78' withoutTrailingDigits. "Kalahari"
As some of the Pharo "OGs" pointed out, you can take a look at the String class (just type CMD-Return, type in String, hit Return) and you will find an amazing number of methods for all kinds of things. Usually you can get some ideas from those. But then there are times when you really just need an answer!
I am trying to build an FP-tree, and I'm quite confused about which data structure I should use to record a prefix path and its occurrence. A prefix path is a sequence recording an item set, like ('coffee','milk','bear'), and its occurrence is an int. I'll post two requirements for the data structure below so that you don't need to go deep into FP-trees:
The occurrence of a prefix path needs to be looked up frequently, so maybe a dict like {prefix_path : occurrence} is the best way to store them.
The prefix path needs to be updated (re-ranked and filtered) in a conditional FP tree.
I have searched through other people's work on GitHub and found that people use {tuple(['coffee','milk','bear']): occurrence} or {frozenset(['coffee','milk','bear']): occurrence} to do so. However, when the prefix path is updated, they need to change the tuple or frozenset into a list and then change it back. I don't think this is very Pythonic.
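For concreteness, the pattern I keep seeing looks roughly like this (a sketch; the item ranking and keep-set are made-up placeholders):

# prefix paths as immutable tuple keys, counts as values
counts = {('coffee', 'milk', 'bear'): 3, ('coffee', 'milk'): 5}

# assumed item ranking for the conditional tree (placeholder ordering)
rank = {'coffee': 0, 'milk': 1, 'bear': 2}

def rerank_and_filter(counts, keep):
    """Rebuild every key: filter items, re-sort by rank, re-wrap as a tuple."""
    new_counts = {}
    for path, n in counts.items():
        new_path = tuple(sorted((i for i in path if i in keep),
                                key=rank.__getitem__))
        new_counts[new_path] = new_counts.get(new_path, 0) + n
    return new_counts

print(rerank_and_filter(counts, keep={'coffee', 'milk'}))
# {('coffee', 'milk'): 8}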
I am wondering if there is a better way to store prefix path with its occurrence.
Azure docs:
Avoid blob names that end with a dot (.), a forward slash (/), or a sequence or combination of the two.
I cannot avoid such names, due to legacy S3 compatibility, so I must encode them.
How should I encode such names?
I don't want to use base64 since that will make it very hard to debug when looking in azure's blob console.
Go has https://golang.org/pkg/net/url/#QueryEscape but it has this limitation:
Go's implementation of url.QueryEscape (specifically, the shouldEscape private function) escapes all characters except the following: alphabetic characters, decimal digits, '-', '_', '.', '~'.
I don't think there's any universal solution to this outside your application scope. Within your application scope, you can do ANY encoding, so it comes down to personal preference for how you like your data to be laid out. There is no "right" way to do this.
Regardless, I believe you should go for these properties:
Conversion MUST be bidirectional and without conflicts in your expected file name space
DO keep file names without ending dots unencoded
With dot-ending files, DO encode just the conflicting dots, keeping the original name readable.
This keeps most (the non-conflicting) file names short, with their original intuitive and hopefully meaningful names, and should you ever be able to rename or phase out the conflicting files, you can simply remove the conversion logic without restructuring all stored data and their URLs.
I'll suggest two schemes for this. Let's say you have these files:
/someParent/normal.txt
/someParent/extensionless
/someParent/single.
/someParent/double..
Use special subcontainers
You could remove the N trailing dots from the end of the file name and translate them into a subcontainer named "dot", "dotdot", etc.
The resulting URLs would look like:
/someParent/normal.txt
/someParent/extensionless
/someParent/dot/single
/someParent/dotdot/double
When reading, you can remove the "dot"*N folder level and append the N dots back to the file name.
Obviously this assumes you don't ever need to have such "dot" folders as data themselves.
This is preferred if stored files can come in with any extension, but you can make some assumptions about folder structure.
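A minimal sketch of that round trip (Python here just for illustration; path handling is simplified and assumes names shaped like the examples above):

def encode(path):
    parent, _, name = path.rpartition('/')
    stripped = name.rstrip('.')
    n = len(name) - len(stripped)       # number of trailing dots
    if n == 0:
        return path                     # non-conflicting names stay untouched
    return '%s/%s/%s' % (parent, 'dot' * n, stripped)

def decode(path):
    parent, _, name = path.rpartition('/')
    head, _, sub = parent.rpartition('/')
    # a folder made purely of "dot" repetitions marks stripped dots
    if sub and len(sub) % 3 == 0 and sub == 'dot' * (len(sub) // 3):
        return '%s/%s%s' % (head, name, '.' * (len(sub) // 3))
    return path

print(encode('/someParent/double..'))       # /someParent/dotdot/double
print(decode('/someParent/dotdot/double'))  # /someParent/double..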
Use discardable artificial extension
Since the conflict is at the end, you could just append a never-used dummy extension to the affected files. For example "endswithdots", but you could choose something more suitable depending on what the expected extensions are:
/someParent/normal.txt
/someParent/extensionless
/someParent/single.endswithdots
/someParent/double..endswithdots
On reading, if the file extension is "endswithdots", you remove that part from the end of the file name.
This is preferred if your data could have any container structure, but you can make some assumptions about incoming extensions.
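Again as a sketch (the marker string is the assumption here, chosen so it never collides with a real extension):

MARKER = 'endswithdots'

def encode(name):
    # only names ending with a dot need the artificial extension
    return name + MARKER if name.endswith('.') else name

def decode(name):
    # strip the marker; the original trailing dot(s) are still in front of it
    return name[:-len(MARKER)] if name.endswith(MARKER) else name

print(encode('double..'))              # double..endswithdots
print(decode('double..endswithdots'))  # double..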
I would advise against Base64 or other full-name encodings, as they would make file names notably longer and lose any meaningful details the names may contain.
Does anyone know how to generate the possible misspellings of a word?
Example: unemployment
- uemployment
- onemploymnet
- etc.
If you just want to generate a list of possible misspellings, you might try a tool like this one. Otherwise, in SAS you might be able to use a function like COMPGED to compute a measure of the similarity between the string someone entered, and the one you wanted them to type. If the two are "close enough" by your standard, replace their text with the one you wanted.
Here is an example that computes the Generalized Edit Distance between "unemployment" and a variety of plausible misspellings.
data misspell;
  length misspell string $16.;
  retain string "unemployment";
  input misspell $16.;
  /* generalized edit distance; 'i' ignores case, 'L' removes leading blanks */
  GED = compged(misspell, string, 'iL');
  datalines;
nemployment
uemployment
unmployment
uneployment
unemloyment
unempoyment
unemplyment
unemploment
unemployent
unemploymnt
unemploymet
unemploymen
unemploymenyt
unemploymenty
unemploymenht
unemploymenth
unemploymengt
unemploymentg
unemploymenft
unemploymentf
blahblah
;
proc print data=misspell label;
  label GED='Generalized Edit Distance';
  var misspell string GED;
run;
Essentially you are trying to develop a list of text strings based on some rules of thumb: one letter is missing from the word, a letter is misplaced into the wrong spot, one letter was mistyped, and so on. The problem is that these rules have to be explicitly defined before you can write the code, in SAS or any other language (this is what Chris was referring to). If your requirement is reduced to this one-wrong-letter scenario then this might be manageable; otherwise, the commenters are correct, and you can easily create massive lists of incorrect spellings (after all, every string except "unemployment" constitutes a misspelling of that word).
Having said that, there are many ways in SAS to accomplish this text manipulation (rx functions, some combination of other text-string functions, macros); however, there are probably better ways to accomplish this. I would suggest an external Perl process to generate a text file that can be read into SAS, but other programmers might have better alternatives.
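For instance, a minimal sketch of such an external script (written in Python here rather than Perl; the one-edit rule set matches the one-wrong-letter scenario above):

def one_edit_misspellings(word):
    """All strings one deletion, transposition, substitution or
    insertion away from word -- the 'one-wrong-letter' rules."""
    letters = 'abcdefghijklmnopqrstuvwxyz'
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    transposes = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
    replaces = {a + c + b[1:] for a, b in splits if b for c in letters}
    inserts = {a + c + b for a, b in splits for c in letters}
    return (deletes | transposes | replaces | inserts) - {word}

# write one misspelling per line for SAS to read
with open('misspellings.txt', 'w') as out:
    for m in sorted(one_edit_misspellings('unemployment')):
        out.write(m + '\n')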
If you are looking for a general spell checker, SAS does have proc spell.
It will take some tweaking to get it working for your situation; it's very old and clunky. It doesn't work well in this case, but you may have better results if you try another dictionary. A Google search will show other examples.
filename name temp lrecl=256;
options caps;

data _null_;
  file name;
  informat name $256.;
  input name &;
  put name;
  cards;
uemployment
onemploymnet
;

proc spell in=name
           dictionary=SASHELP.BASE.NAMES
           suggest;
run;

options nocaps;
I have a LaTeX document where I'd like the numbering of floats (tables and figures) to be in one numeric sequence from 1 to x rather than two sequences according to their type. I'm not using lists of figures or tables either and do not need to.
My documentclass is report and typically my floats have captions like this:
\caption{Breakdown of visualisations created.}
\label{tab:Visualisation_By_Types}
A quick way to do it is to put \addtocounter{table}{1} after each figure, and \addtocounter{figure}{1} after each table.
It's not pretty, and on a longer document you'd probably want to either include that in your style sheet or template, or go with cristobalito's solution of linking the counters.
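For concreteness, a minimal sketch of that pattern (the float contents here are placeholders):

\begin{figure}[ht]
  \centering
  % (figure content here)
  \caption{A figure.}% numbered 1
\end{figure}
\addtocounter{table}{1} % keep the table counter in step

\begin{table}[ht]
  \centering
  % (table content here)
  \caption{A table.}% numbered 2
\end{table}
\addtocounter{figure}{1} % keep the figure counter in step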
The differences between the figure and table environments are very minor -- little more than them using different counters, and being maintained in separate sequences.
That is, there's nothing stopping you putting your {tabular} environments in a {figure}, or your graphics in a {table}, which would mean that they'd end up in the same sequence. The problem with this case (as Joseph Wright notes) is that you'd have to adjust the \caption, so that doesn't work perfectly.
Try the following, in the preamble:
\makeatletter
\newcounter{unisequence}
\def\ucaption{%
  \ifx\@captype\@undefined
    \@latex@error{\noexpand\ucaption outside float}\@ehd
    \expandafter\@gobble
  \else
    \refstepcounter{unisequence}% <-- the only change from default \caption
    \expandafter\@firstofone
  \fi
  {\@dblarg{\@caption\@captype}}%
}
\def\thetable{\@arabic\c@unisequence}
\def\thefigure{\@arabic\c@unisequence}
\makeatother
Then use \ucaption in your tables and figures, instead of \caption (change the name ad lib). If you want to use this same sequence in other environments (say, listings?), then define \the<foo> the same way.
My earlier attempt at this is in fact completely broken, as the OP spotted: getting the list of figures wrong is, instead of being trivial and only fiddly to fix, absolutely fundamental (ho, hum).
(For the aficionados, it comes about because \advance commands are processed in TeX's gut, but the content of the .lof, .lot, and .aux files is fixed in TeX's mouth, at expansion time; thus what was written to the files was whatever random value \@tempcnta had at the point \caption was called, ignoring the \advance calculations, which were then dutifully written to the file, and then ignored. Doh: how long have I known this but never internalised it!?)
Dutiful retention of earlier attempt (on the grounds that it may be instructively wrong):
No problem: try putting the following in the preamble:
\makeatletter
\def\tableandfigurenum{\@tempcnta=0
  \advance\@tempcnta\c@figure
  \advance\@tempcnta\c@table
  \@arabic\@tempcnta}
\let\thetable\tableandfigurenum
\let\thefigure\tableandfigurenum
\makeatother
...and then use the {table} and {figure} environments as normal. The captions will have the correct 'Table/Figure' text, but they'll share a single numbering sequence.
Note that this example gets the numbers wrong in the listoffigures/listoftables, but (a) you say you don't care about that, (b) it's fixable, though probably mildly fiddly, and (c) life is hard!
I can't remember the syntax, but you're essentially looking for counters. Have a look here, under the custom floats section. Assign the counters for both tables and figures to the same thing and it should work.
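One way to do that counter assignment (a sketch of the idea, not tested against any particular class) is to point the table counter at the figure counter in the preamble:

\makeatletter
% make {table} increment and print the same counter register as {figure},
% so both float types share one numbering sequence
\let\c@table\c@figure
\makeatother

As with the answers above, the list-of-tables/figures numbering may still come out wrong, but the question says those aren't needed.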
I'd just use one type of float (let's say 'figure'), then use the caption package to remove the automatically added "Figure" text from the caption and deal with it by hand.