koRpus: tokenize command on a large folder of Word files - readability

I have made some headway in getting koRpus to analyze my data, but there are lingering problems.
The 'tokenize' command seems to work--kind of. I run the following line of code:
word <- tokenize("/Users/gdballingrud/Desktop/WPSCASES 1/", lang="en")
And it produces a 'Large krp.text' object. However, the size of the object (5.6 MB) is far less than the size of the folder I reference in the code (260 MB). Further, when I use the 'readability' command to generate text analysis scores, like so:
all <- readability(word)
It returns one readability score for the whole krp.text object (one per readability measure, I mean).
I need readability scores for each Word file I have in my folder, and I need to use koRpus (others like quanteda don't generate some of the readability measures that I need, like LIX and Kuntzsch's Text-Redundanz-Index).
Is anyone experienced enough with koRpus to point out what I have done wrong? The recurring problems are: 1) getting the tokenize command to recognize each file in my folder, and 2) getting readability scores for each separate file.
Thanks,
Gordon

Related

GPT-J and GPT-Neo generate sentences that are too long

I fine-tuned GPT-J and GPT-Neo models on my texts and am trying to generate new text. But very often the generated sentences are very long (sometimes 300 characters each), although the sentences in the dataset are of normal length (usually 50-100 characters). I have tried a lot of things and adjusted the temperature and top_k, but half of the results still come out as long phrases and I need shorter ones.
What can I try?
Here are long examples of generated results:
The support system that they have built has allowed us as users who
are not code programmers or IT administrators some ability to create
our own custom solutions without needing much programming experience
ourselves from scratch!
All it requires are documents about your inventory process but
I've found them helpful as they make sure you do everything right for
maximum efficiency because their knowledge base keeps reminding me
there's new ways i can be doing some things wrong since upgrading my
license so even though its good at finding errors with documentation
like an auditor may bring up later downline someone else might benefit
if those files dont exist anymore after one year when upgrades renews
automatically!
With all GPT models you can specify the "max_length" parameter during generation. This caps the generated sequence at max_length tokens (generation can still stop earlier at an end-of-sequence token). You could also play with num_return_sequences and use a helper function to choose the shortest sequence.
Example:
output = model.generate(input_ids, do_sample=True, top_k=50, max_length=100, top_p=0.95, num_return_sequences=1)
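For example, a rough sketch of that helper (the function name pick_shortest is made up here; it assumes a Hugging Face transformers model and tokenizer are already loaded, as in the example above):
def pick_shortest(model, tokenizer, prompt, n=5, max_length=100):
    """Generate n candidate continuations and return the shortest one."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    outputs = model.generate(
        input_ids,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        max_length=max_length,              # hard cap on total token count
        num_return_sequences=n,             # several candidates to pick from
        pad_token_id=tokenizer.eos_token_id,
    )
    texts = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    return min(texts, key=len)              # keep the shortest candidate
You still pay for generating several sequences, but it gives you direct control over how long the returned text can be.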
These large language models are trained on massive amounts of data, and fine-tuning them takes patience as they learn to adapt to what you're feeding them. Try different things: adjust your training data format, try different samples, use a pre-prompt during generation to guide the model, and so on. A model like GPT-J does a mind-numbingly large amount of calculations just to spit out a single word, so it is hard to predict what exactly is causing it to say one thing over another.

Microsoft translator engine customization: parallel txt files

I am trying to perform some NMT engine customization for Japanese, but I am having some difficulties uploading parallel txt files. I've gathered 10k parallel sentences and put them into two txt files.
As the guide suggested, I've also been careful to remove sentences containing the \n and \r characters, but upon uploading, the counts shown in the portal don't match the number of lines in my files.
What's wrong?
We display the sentence counts because the model training engine operates at the sentence level. The expected format of the txt parallel file set is one sentence per line. During the upload process we run a sentence breaker, which identifies end-of-sentence markers and breaks accordingly. This is why the count of sentences does not always match the count of lines: sentences, not lines of the input file, are the units we operate on.
This is also why we suggest removing newline characters within sentences. The newline is considered an end of sentence marker, so having newlines within a sentence creates a false sentence break.
In response to your second concern, we do run a sentence-aligning process on most data that is submitted. If there is an inconsistent number of sentences in the uploaded parallel files, we can usually recover most of the sentence pairs, as long as the sentences are fairly close.
After some "debugging" I've noticed that the number shown in the portal is the number of sentences (instead of lines, my bad!). I find it kind of confusing (and not really useful in my opinion). What would be the usefulness of displaying this information?
In addition, I've noticed that there is no warning if you upload one file containing fewer lines than the other, which would make the files not parallel anymore (the whole point of parallel files is to have X lines in the source file and X lines in the target file). It would be helpful if at least a warning were shown to prevent mistakes: if you use parallel files and len(f1) != len(f2), it's a great indicator that something is off.
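For what it's worth, that length check is only a few lines to script yourself before uploading; here is a minimal sketch (the file names are placeholders):
def check_parallel(source_path, target_path):
    """Warn if the two halves of a parallel corpus have different line counts."""
    with open(source_path, encoding="utf-8") as src:
        n_src = sum(1 for _ in src)
    with open(target_path, encoding="utf-8") as tgt:
        n_tgt = sum(1 for _ in tgt)
    if n_src != n_tgt:
        print(f"warning: {source_path} has {n_src} lines but "
              f"{target_path} has {n_tgt} - the files are no longer parallel")
    return n_src, n_tgt

check_parallel("train.ja.txt", "train.en.txt")  # placeholder file names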

Using "seed" based math to recreate application instances

Okay, so I was thinking today about Minecraft, a game I'm sure many of you are familiar with, and while my question isn't directly related to the game, I find it much simpler to describe using the game as an example.
My question is: is there any way a type of "seed" or string of characters can be used to recreate an instance of a program (not in the literal programming sense) by storing a code which, when re-entered into the program as a string at run-time, recreates the data it once held, in fields, text boxes, canvases, for example, exactly as it was?
As I understand it, Minecraft takes the string of ASCII characters you enter, all of which are really numbers, and performs a series of operations on it which evaluate to some type of hash or finite number; this number (again, as I understand it) is the representation of the string you entered. It makes sense that a given string, when parsed by this algorithm, will always evaluate to the same hash: 1 + 1 will always equal 2, so a seed's value must always come out the same in the end. And in doing so you have the ability to replicate worlds exactly, by entering this sort of key which is evaluated the same way on every machine.
Now, if we can exactly replicate worlds like this, is it possible to bring the idea into a more abstract context like the following?
Say you have an application like Microsoft Word. Word saves the data you have entered as a file on your hard drive: it holds formatting data, the strings you've entered, the format of the file... all of that in a physical file. Now imagine that when you entered your essay into Word, instead of saving it and bringing your laptop to school, you clicked on "parse" and, instead of creating a file, you were given a hash code. Now you go to school, knowing you have to print it, so you log onto a computer and open Word. Instead of "open" there is now an option called "evaluate"; you click it, enter the hash your other computer produced, and it recreates the exact essay you wrote.
Is this possible? If so, are there obvious implementations of this that I simply am not thinking of, or that are so much a part of everyday life that I don't recognize them? And finally, if it is possible, what methods and algorithms would go into such a thing?
[EDIT]
I had to do some research on the anatomy of a seed, and I think this explains it well:
"The limit is 32 characters or, for a numeric seed, 19 digits plus the minus sign. Numeric seeds can range from -9223372036854775808 to 9223372036854775807, which is a total of 18446744073709551616. Text strings entered will be "hashed" to one of the numeric seeds in the above range. The "Seed for the World Generator" window only allows 32 characters to be entered and will not show or use any more than that."
But looking back on it, lossless compression IS EXACTLY what I was describing. After re-reading the wiki page I remembered that (as you correctly point out) the seed only partakes in the generation; the final data is stored as a "physical" file on the HDD, which (again, as you say) is raw, uncompressed data in a file.
So in retrospect, I believe I was describing lossless compression, trying in my mind to figure out how the seed was able to replicate the exact same world, forgetting that the seed is only responsible for generating the world, not for saving or compressing it.
So thank you for your help, guys! It's really appreciated. I believe we can call this one solved!
There are several possibilities to achieve this "string" that recovers your data. However, they're not all applicable in every context.
An actual seed, which initializes, for example, a pseudo-random number generator, then allows you to recreate the same sequence of pseudo-random numbers (see this question; a short sketch of this idea appears at the end of this answer).
This is possibly similar to what Minecraft relies on, because the whole process of how to create a world based on some choices (possibly pseudo-random choices) is known in advance. Even if we pretend that we have random numbers, computers are actually deterministic, which makes this possible.
If your document were generated randomly then this would be applicable: with the same seed, the same gibberish comes out.
Some key-value dictionary, or hash map. Then the values have to be accessible to both sides, and the string is the key that allows you to retrieve the value.
Think, for example, of storing your Word file on an online server: then your key is the URL linking to your file.
Compressing all the information that is in your data into the string. This is much harder, and there are strong limits due to the entropy of the data. See Shannon's source coding theorem for example.
You would be better off (as in, it would be easier) just compressing your file with a usual algorithm (zip or 7z or something else) rather than reimplementing it yourself, especially as soon as your document starts having fancy things (different styles, tables, pictures, unusual characters...).
With the simple hypothesis of 27 possible characters (26 letters and the space), Shannon himself shows in Prediction and Entropy of Printed English (Bell System Technical Journal, 30:1, January 1951, pp. 50-64, online version) that there are about 2.14 bits of entropy per letter in English. That's about 550 characters encoded with your 32-character string.
While this is significantly better than the 8 bits we use for each ASCII character, it also shows it is very likely to be impossible to encode a document in English in less than a fourth of its size. Then you'd still have to add punctuation, and all the rest of the fuss.
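To make the first possibility concrete, here is a minimal Python sketch of seed-based reproducibility; the generate_world function is invented for illustration, and a CRC stands in for Minecraft's actual text-seed hashing:
import random
import zlib

def generate_world(seed, size=10):
    """Deterministically 'generate' data from a seed: the same seed
    always yields the same pseudo-random sequence."""
    rng = random.Random(seed)            # PRNG seeded independently of global state
    return [rng.randint(0, 99) for _ in range(size)]

# The same numeric seed recreates exactly the same "world" every time.
assert generate_world(12345) == generate_world(12345)

# A text seed can be reduced to a number first; zlib.crc32 is used here
# because, unlike Python's built-in hash(), it is stable across runs.
numeric_seed = zlib.crc32(b"my world")
assert generate_world(numeric_seed) == generate_world(numeric_seed)
print(generate_world(numeric_seed))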

How to quickly infer start/end time of files that only show start time?

I have a huge list of video files from a webcam that look like this:
video_123
video_456
video_789
...
Where each number (123, 456, and 789) represents the start time of the file in seconds since epoch. The files are created based on file size and are not always the same duration. There may also be gaps in the files (eg camera goes down for an hour). It is a custom file format that I cannot change.
I have a tool that can extract out portions of the video given a time range and a set of files. However, it will run MUCH faster if I only give the tool the files that have frames within the given range. It's very costly to determine the duration of each file. Instead, I'd like to use the start timestamp to rule out most files. For example, if I wanted video for 500-600, I know video_123 will not be needed because video_456 is larger. Also, video_789 is larger than 600 so it will not be needed either.
I could do an ls and iterate through each file, converting the timestamp to an int and comparing until we hit a file bigger than the desired range. I have a LOT of files and this is slow. Is there a faster method? I was thinking of having some sort of binary tree that would give log2(n) search time and already have the timestamps parsed out. I am doing most of this work in bash and would prefer to use simple, common tools like grep, awk, etc. However, I will consider Perl or some other scripting language if there is a compelling reason.
If you do several searches over the files, you can pre-process them, in the sense of loading them into a bash array (note: bash, not sh), sorting them, and then doing a binary search. Assume for a second that the name of each file is just the time tag; this will simplify the examples (you can always do ${variable/video_/} to remove the prefix).
First, you can use an array to load all the files sorted:
files=($(printf '%s\n' * | sort -n))
Then implement the binary search (just a sketch, searching for the position of the $min-$max range):
nfiles=${#files[*]}
nfiles2=$(( nfiles / 2 ))
if test "${files[$nfiles2]}" -gt "$max"
then
    # midpoint is already past the range: keep searching the lower half
    nfiles2=$(( nfiles2 - nfiles2 / 2 ))
else
    # otherwise compare against $min and narrow towards the upper half, etc.
    :
fi
And so on. Once you have the files ordered in the array, repeated searches become much faster.
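If a scripting language is acceptable, the same idea fits in a few lines of Python with the standard bisect module; this is just a sketch, assuming the filenames follow the video_<start> pattern described in the question:
import os
import sys
from bisect import bisect_left, bisect_right

def candidate_files(directory, start, end):
    """Return the files that may contain frames in the range [start, end]."""
    files = sorted(os.listdir(directory), key=lambda n: int(n.split("_")[1]))
    starts = [int(n.split("_")[1]) for n in files]
    lo = max(bisect_right(starts, start) - 1, 0)   # last file starting at or before start
    hi = bisect_left(starts, end + 1)              # files starting after end are ruled out
    return files[lo:hi]

if __name__ == "__main__":
    directory, start, end = sys.argv[1], int(sys.argv[2]), int(sys.argv[3])
    print("\n".join(candidate_files(directory, start, end)))
With the three example files and a requested range of 500-600, this returns only video_456, matching the reasoning in the question.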
Because of a quirk of UNIX design, there is no way to search for the name of a file in a directory other than stepping through the filenames one by one. So if you keep all your files in one directory, you're not going to get much faster than using ls.
That said, if you're willing to move your files around, you could turn your flat directory into a tree by splitting on the most significant digits. Instead of:
video_12301234
video_12356789
video_12401234
video_13579123
You could have:
12/video_12301234
12/video_12356789
12/video_12401234
13/video_13579123
or even:
12/30/video_12301234
12/35/video_12356789
12/40/video_12401234
13/57/video_13579123
For best results with this method, you'll want to have your files named with leading zeros so the numbers are all the same length.

How to test a program processing large amounts of data stored in an unpredictable format

What I have to do
I'm trying to manipulate rather large amounts of data stored in Excel files (one of the workbooks has as many as 150 sheets). The result of these manipulations may yield approximately 800,000 rows in a database table.
The problem
The data stored in the spreadsheets has an unpredictable format. The company that generated these spreadsheets had no fixed or documented format for exporting these files, and sometimes erroneous data appears. For example, most of the years are represented like "2009", but there are cases where a year is represented as "20". Another example: the data is not really normalized in these files, so I use separators to split the values of certain cells, and sometimes these separators change.
There are things like these that I couldn't predict, and I discovered them only after running an already evolved version of my program over a pretty large part of the available data.
The question
How can one test the correctness of a program in such a situation? Or rather, how to achieve a pretty stable version of the product without running it over the whole available data?
Should I take a defensive approach and throw exceptions whenever some kind of unexpected issue arises? The main loop of the program could then catch and log them and continue with the remaining data. This would yield some processed data, but it means that on a subsequent run of the program I would have to check for what's already in the database from previous runs (which I don't really like).
What's your opinion? How would you tackle this problem?
If there is no specification for what the format of the data is, then anything is acceptable.
If not, then there is either an explicit or an implicit specification of the data. I would try to nail this down right now. If you can't get an explicit enough definition of the data to write your program so that it can be expected to run without error, then I would say you are taking a very large risk of causing some serious damage, depending on how this data is being used.
You should write your program so that it either throws an exception or logs an error whenever it runs across data that does not meet the specification. Then run the program on PART of the available data until it runs without exception. This can be viewed as a training set for the development of your program. Then use some of the held-back data as a TEST set. This will give you an estimate of how many exceptions/errors your program will generate in production.
Overfitting is a common machine learning concept, but it is relevant to other tasks as well, such as program development. It is surprising to me how developers can write a bunch of unit tests, code their application to perform well on them, and then expect similar or bug-free performance in production.
If you're not willing to take all these steps (i.e. run your code on essentially all of the data -- since the test set is also making use of the data) then I would say the task is too large to do.
As an aside, rather than creating a definition of a format that is very strange and peculiar in order to account for all the "errors" in the current data, you might want to create a new, normalized specification for the data (in the sense that these quirks are simplified away), and then write a "faulty document patcher" that can be run on faulty documents to fix them.
If the application generating the data is still in production, then you might need to go to the developers of this application to get a buy in on the new spec. Once you have that, you can then start logging bugs against their application, so hopefully the faulty document patcher can be retired.
More likely, I'm guessing that the software developers are long gone, no one understands the code anymore, if it is even running at all.
How can one test the correctness of a program in such a situation? Or rather, how to achieve a pretty stable version of the product without running it over the whole available data?
For every single data type, I would set reasonable constraints on the values it is allowed to take.
If a cell violates these constraints, then throw an exception containing the piece of data it failed on and its data type. If a piece of data violates its constraints, you can modify the source to include the additional constraints required for that piece of data, plus a conversion method to make it uniform.
To use the date example you gave, initially a year would have the constraint that it could only be four digits. When the program came across the "20" it would throw an exception.
Then you could allow two-digit years, and add a method to convert a two-digit year into a four-digit one to allow further processing.
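A minimal sketch of that constraint-plus-conversion idea in Python (the names parse_year and YearFormatError are invented for the example):
class YearFormatError(ValueError):
    """Raised when a cell value violates the year constraints."""

def parse_year(raw):
    """Validate a year cell, converting known deviations to four digits."""
    value = str(raw).strip()
    if len(value) == 4 and value.isdigit():
        return int(value)
    # Relaxed constraint added after hitting values like "20";
    # interpreting them as 20xx is an assumption about this data.
    if len(value) == 2 and value.isdigit():
        return 2000 + int(value)
    raise YearFormatError(f"unexpected year value: {raw!r}")
Each new oddity you hit either becomes another accepted branch with a conversion, or stays an exception that gets logged.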
One question is, will you run your program more than once? From your question it sounds possible you only want to run it once, and then you will then work with the data in the database.
In which case you can be very defensive - throw exceptions whenever unexpected data appears. Run the program repeatedly on ever-larger sets of the data. Initially, solve any exceptions by altering the code, as it's a good rule of thumb that the exceptions you find first are going to be common. You might want to empty the output database between runs.
Later on, you will be finding rare exceptions that might only occur a couple of times in the input. Just solve these by hand and insert the corresponding rows in the database yourself. Or write another small program that reads your exception information and inserts the new rows, rather than running your whole big program again.
Typically for this sort of thing I do as MarkJ suggested, and I encode the whole thing in unit tests.
So I compose a small datafile that at first contains only a few rows of normal data. That's unit test number 1.
Then I take a quick visual scan of some of the data to spot any obvious exceptions. Unit tests 2 through n.
Finally, I write parser code until it passes all unit tests, and throws and logs exceptions for all un-managed data.
I then use these oddball bits of data to make new unit tests, and improve the parser until it can pass those too.
Although sometimes accommodating some really strange bit of data adds more parser complexity than it's worth, and I'll just log the exception, dump it, and move on. This is a matter of professional judgment.
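As a rough sketch of what those tests can look like (pytest-style; the fixture file names and the parse_rows and UnhandledDataError names are invented for the example):
import pytest
from myparser import parse_rows, UnhandledDataError  # hypothetical parser module

def test_normal_rows():
    # Unit test 1: a small file of perfectly normal rows parses cleanly.
    assert len(parse_rows("fixtures/normal_rows.xlsx")) == 5

@pytest.mark.parametrize("fixture", [
    "fixtures/two_digit_year.xlsx",
    "fixtures/alternate_separator.xlsx",
])
def test_known_oddities(fixture):
    # Tests 2..n: one small fixture per oddball case spotted in the real data.
    assert parse_rows(fixture)

def test_unknown_garbage_is_rejected():
    # Anything still unrecognized should raise, not slip through silently.
    with pytest.raises(UnhandledDataError):
        parse_rows("fixtures/garbage.xlsx")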
How about processing every piece of data (so you don't have to check for dupes)? Those that pass go into the database. The exceptions go into an exception file. The user can open the exception file and make corrections/modifications to the data, and then run your program on the exception file.
This will isolate unhandled data for the user to correct and prevent you from processing the same data twice (or more).
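A minimal sketch of that flow (process_record and insert_row stand in for whatever your program already does):
import csv

def process_record(record):
    """Stand-in for your existing per-record logic; raises on bad data."""
    if not record:
        raise ValueError("empty record")
    return record

def insert_row(db, row):
    """Stand-in for your existing database insert."""
    db.append(row)

def run(records, db, exceptions_path="exceptions.csv"):
    """Process every record once: good rows go to the database,
    failures go to an exception file the user can correct and re-submit."""
    with open(exceptions_path, "w", newline="") as exc_file:
        writer = csv.writer(exc_file)
        for record in records:
            try:
                row = process_record(record)
            except Exception as err:
                # Keep the raw data next to the reason it failed.
                writer.writerow(list(record) + [str(err)])
            else:
                insert_row(db, row)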
