How to generate large text in OpenAI by ignoring the stop sequence? - openai-api

I wanted to generate large text in OpenAI, but the problem arises with the stop sequence. It only generates a few sentences for a topic. Is there any way I can bypass the stop sequence for OpenAI?

You should increase the Maximum length (max_tokens) parameter for this; the higher the value, the longer the text you will get.
max_tokens: integer, Optional, Defaults to 16.
The maximum number of tokens to generate in the completion.
The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
See the official docs for more information.
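For example, with the legacy (pre-1.0) openai Python package, a higher max_tokens can be requested like this; the model name, prompt and values below are only illustrative:

# Sketch using the legacy openai package (pre-1.0); model, prompt and values are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from a config or environment variable in practice

response = openai.Completion.create(
    model="text-davinci-003",   # any completions-capable model
    prompt="Write a detailed overview of renewable energy sources.",
    max_tokens=1000,            # much higher than the default of 16, so the output can run long
    temperature=0.7,
    stop=None,                  # no stop sequence, so generation only ends at max_tokens or a natural stop
)

print(response["choices"][0]["text"])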

Related

Size of input allowed in AllenNLP - Predict when using a Predictor

Does anyone have an idea what the input text size limit is that can be passed to the predict(passage, question) method of the AllenNLP Predictors?
I have tried with a passage of 30-40 sentences, which works fine. But it stops working when I pass a significantly larger amount of text, around 5K sentences.
Which model are you using? Some models truncate the input; others try to handle arbitrary-length input using a sliding-window approach. With the latter, the limit will depend on the memory available on your system.
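If the model you load does truncate, one hedged workaround is to chunk the passage yourself and query each chunk; the model path, chunk size and output key below are assumptions, not part of AllenNLP's own guidance:

# Sketch: split a long passage into chunks that fit the model and predict per chunk.
# The archive path is a placeholder; the "best_span_str" key applies to BiDAF-style
# reading-comprehension predictors and may differ for other models.
from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path("path/or/url/to/your-rc-model.tar.gz")

def predict_long_passage(passage, question, chunk_chars=10000):
    chunks = [passage[i:i + chunk_chars] for i in range(0, len(passage), chunk_chars)]
    answers = []
    for chunk in chunks:
        result = predictor.predict(passage=chunk, question=question)
        answers.append(result.get("best_span_str"))
    # naive aggregation: a real pipeline would re-rank the chunk-level answers
    return answers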

Scikit-learn models give weight to a random variable? Should I remove features with less importance?

I do some feature selection by removing correlated variables and using backwards elimination. However, after all that was done, as a test I threw in a random variable and then trained logistic regression, random forest and XGBoost. All 3 models gave the random feature an importance greater than 0. First, how can that be? Second, all models rank it toward the bottom, but it is not the lowest feature. Is this a valid step for another round of feature selection, i.e. removing all features that score below the random feature?
The random feature is created with
model_data['rand_feat'] = np.random.randint(100, size=model_data.shape[0])
This can happen. The random feature is just a number you sample, but this random sampling can still generate a pattern by chance. I don't know whether you are doing classification or regression, but let's consider the simple example of binary classification. We have class 1 and class 0, with 1,000 data points from each. When you sample a random number for each data point, it can happen that, for example, a majority of class 1 gets a value higher than 50, whereas a majority of class 0 gets a random number smaller than 50.
So in effect, this can result in some pattern. I would therefore guess that every time you run your code, the random feature's importance changes. It is always ranked low because it is very unlikely that a strong pattern is generated (e.g. all 1s get values higher than 50 and all 0s get values lower than 50).
Finally, yes, you should consider dropping the features with low importance.
I agree with berkay's answer that a random variable can show patterns that are associated with your outcome variable purely by chance. Secondly, I would neither include a random variable in model building nor use it as my filtering threshold, because if the random variable has, by chance, a significant or nearly significant association with the outcome, it will suppress the expression of important features of the original data, and you will probably end up losing those important features.
In the early phase of model development I always include two random variables.
For me it is like a 'sanity check' since these are in effect junk variables or junk features.
If any of my features are worse in importance than the junk features, then that is a warning sign that I need to look more carefully at the worth of those features or do some better feature engineering.
For example what does theory suggest about the inclusion of those features?
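As a sketch of that sanity check (the data here is synthetic and the classifier is just one choice; swap in your own X, y and models):

# Sketch of the 'junk feature' sanity check: add random features and compare importances.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(1000, 5)), columns=[f"feat_{i}" for i in range(5)])
y = (X["feat_0"] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# two junk features with no relationship to the target
X["rand_uniform"] = rng.random(1000)
X["rand_int"] = rng.integers(0, 100, size=1000)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importances = pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importances)
# The junk importances will be > 0 and will shift from run to run; features that
# rank at or below them deserve a closer look or better feature engineering.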

Gpt2 generation of text larger than 1024

I know the context supported by GPT-2 is 1024 tokens, but I assume there's some technique they utilized to train and generate text longer than that in their results. Also, I saw many GPT-2-based repos training on text longer than 1024. But when I tried generating text longer than 1024 using run_generation.py, it throws a runtime error: The size of tensor a (1025) must match the size of tensor b (1024) at non-singleton dimension 3. I have the following questions:
Shouldn't it be possible to generate longer text since a sliding window is used?
Can you please explain what's necessary to generate longer text? What changes will I have to make to the run_generation.py code?
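For what it's worth, here is one way the sliding-window idea could look with the Hugging Face transformers library. This is a sketch rather than a patch to run_generation.py; max_new_tokens needs a reasonably recent transformers version, and the chunk sizes are arbitrary:

# Sketch: generate past GPT-2's 1024-token window by repeatedly conditioning on
# only the most recent tokens. Values for chunk_size and target_length are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context_window = 1024   # GPT-2's maximum context length
chunk_size = 200        # tokens generated per step
target_length = 3000    # total tokens wanted, beyond the 1024 limit

generated = tokenizer.encode("In a distant future,", return_tensors="pt")
while generated.shape[1] < target_length:
    # keep only the tail of the sequence as conditioning context
    context = generated[:, -(context_window - chunk_size):]
    output = model.generate(
        context,
        max_new_tokens=chunk_size,
        do_sample=True,
        top_k=50,
        pad_token_id=tokenizer.eos_token_id,
    )
    new_tokens = output[:, context.shape[1]:]           # append only the new tokens
    generated = torch.cat([generated, new_tokens], dim=1)

print(tokenizer.decode(generated[0]))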

Batch running spaCy nlp() pipeline for large documents

I am trying to run the nlp() pipeline on a series of transcripts amounting to 20,211,676 characters. I am running on a machine with 8 GB of RAM. I'm very new to both Python and spaCy, but the corpus comparison tools and sentence chunking features are perfect for the paper I'm working on now.
What I've tried
I've begun by loading the English pipeline and disabling 'ner' for faster processing:
nlp = spacy.load('en_core_web_lg', disable = ['ner'])
Then I break the corpus into pieces of 800,000 characters, since spaCy recommends roughly 100,000 characters per GB of memory:
split_text = [text[i:i+800000] for i in range(0, len(text), 800000)]
Then I loop the pieces through the pipeline and create a list of Doc objects:
nlp_text = []
for piece in split_text:
    piece = nlp(piece)
    nlp_text.append(piece)
This works after a long wait. Note: I have tried raising the threshold via nlp.max_length, but anything above 1,200,000 crashes my Python session.
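The same loop can also be written with spaCy's nlp.pipe, which streams and batches the documents internally (a sketch; batch_size is an illustrative value):

# Sketch: stream the pieces instead of calling nlp() one at a time.
nlp_text = list(nlp.pipe(split_text, batch_size=4))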
Now that I have everything piped through, I need to concatenate everything back, since I will eventually need to compare the whole document to another (of roughly equal size). I would also be interested in finding the most frequent noun phrases in the document as a whole, not just in artificial 800,000-character pieces.
nlp_text = ''.join(nlp_text)
However I get the error message:
TypeError: sequence item 0: expected str instance, spacy.tokens.doc.Doc found
I realize that I could convert to strings and concatenate those, but that would defeat the purpose of having "token" objects to work with.
What I need
Is there anything I can do (apart from paying for expensive AWS CPU time) to split my documents, run the nlp() pipeline, and then join the tokens to reconstruct my complete document as an object of study? Am I running the pipeline wrong for a big document? Am I doomed to getting 64 GB of RAM somewhere?
Edit 1: Response to Ongenz
(1) Here is the error message I receive
ValueError: [E088] Text of length 1071747 exceeds maximum of 1000000.
The v2.x parser and NER models require roughly 1GB of temporary memory
per 100,000 characters in the input. This means long texts may cause
memory allocation errors. If you're not using the parser or NER, it's
probably safe to increase the nlp.max_length limit. The limit is in
number of characters, so you can check whether your inputs are too
long by checking len(text).
I could not find a part of the documentation that refers to this directly.
(2) My goal is to do a series of measures including (but not limited to, if the need arises): word frequency, tf-idf counts, sentence counts, counting the most frequent noun chunks, and comparing two corpora using w2v or d2v strategies.
My understanding is that I need every part of the spaCy pipeline apart from NER for this.
(3) You are completely right about cutting the document; in a perfect world I would cut on a line break instead. But as you mentioned, I cannot use join to regroup my broken-apart corpus, so it might not be relevant anyway.
You need to join the resulting Docs using the Doc.from_docs method:
from spacy.tokens import Doc

docs = []
for piece in split_text:
    doc = nlp(piece)
    docs.append(doc)

merged = Doc.from_docs(docs)
See the documentation here for more details.
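As a follow-up (assuming the parser stayed enabled, as in the question, so noun chunks are available), the merged Doc can then be queried as a whole, for example to count the most frequent noun chunks:

# The merged Doc behaves like any other Doc, so whole-document statistics work on it.
from collections import Counter

noun_chunk_counts = Counter(chunk.text.lower() for chunk in merged.noun_chunks)
print(noun_chunk_counts.most_common(20))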

After computing the hash, what is the significance of keeping only the last byte of the hash?

Problem: To generate test and train sets in order to improve on generalization error.
Possible solutions:
1. Split instances into 80% train and 20% test, train your model on the train set and test on the test set. But repeating this again and again will eventually let the model "cram" the data, since repeated random splits will later put instances that were chosen for the test set the first time into the train set.
The above approach might fail when we fetch an updated dataset.
2. Another approach is to select each instance's most stable feature(s) (a combination is possible) to create a unique and immutable identifier that remains robust even after the dataset is updated. After selecting one, we could compute a hash of each instance's identifier, keep only the last byte of the hash, and put the instance in the test set if that value is <= 256 * test_ratio. This ensures that the test set remains consistent across multiple runs, even if the dataset is refreshed.
Question: What is the significance of taking just the last byte of the computed hash?
(Thanks to Aurélien Géron)
We need a way to sample the same test set even after fetching an updated dataset.
SOLUTION: use each instance's identifier to decide whether or not it should go into the test set (assuming the instances have a unique and immutable identifier): compute a hash of each instance's identifier, keep only the last byte of the hash, and put the instance in the test set if the value is <= 256 * test_ratio, i.e. 51 for a 20% test ratio.
This ensures that the test set will remain consistent across multiple runs, even if you refresh the dataset. The new test set will contain 20% of the new instances, but it will not contain any instance that was previously in the train set.
First, a quick recap on hash functions:
A hash function f(x) is deterministic, such that if a==b, then f(a)==f(b).
Moreover, if a!=b, then with a very high probability f(a)!=f(b).
With this definition, a function such as f(x) = x % 12345678 (where % is the modulo operator) meets the criterion above, so it is technically a hash function. However, most hash functions go beyond this definition and act more or less like pseudo-random number generators, so if you compute f(1), f(2), f(3), ..., the output will look very much like a random sequence of (usually very large) numbers.
We can use such a "random-looking" hash function to split a dataset into a train set and a test set.
Let's take the MD5 hash function, for example. It is a random-looking hash function, but it outputs rather large numbers (128 bits), such as 136159519883784104948368321992814755841.
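For instance, viewing an MD5 digest as a 128-bit integer (the identifier below is arbitrary):

# Illustrative only: MD5 output interpreted as a 128-bit integer.
import hashlib

digest = hashlib.md5(b"instance-42").digest()     # 16 bytes
print(int.from_bytes(digest, byteorder="big"))    # a number on the order of 10^38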
For a given instance in the dataset, there is a 50% chance that its MD5 hash will be smaller than 2^127 (assuming the hashes are unsigned integers), a 25% chance that it will be smaller than 2^126, and a 12.5% chance that it will be smaller than 2^125. So if I want to split the dataset into a train set and a test set, with 87.5% of the instances in the train set and 12.5% in the test set, then all I need to do is compute the MD5 hash of some unchanging features of the instances, and put the instances whose MD5 hash is smaller than 2^125 into the test set.
If I want precisely 10% of the instances to go into the test set, then I need to check MD5 < 2^128 * 10 / 100.
This would work fine, and you can definitely implement it this way if you want. However, it means manipulating large integers, which is not always very convenient, especially given that Python's hashlib.md5() function outputs byte arrays, not long integers. So it's simpler to just take one or two bytes in the hash (anywhere you wish), and convert them to a regular integer. If you just take one byte, it will look like a random number from 0 to 255.
If you want to have 10% of the instances in the test set, you just need to check that the byte is smaller than or equal to 25. It won't be exactly 10%, but 26/256 = 10.15625%, which is close enough. If you want higher precision, you can take 2 or more bytes.
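Putting that together, a minimal sketch of the last-byte split (the identifier column and test ratio are placeholders, and df is assumed to be a pandas DataFrame):

# Sketch: hash-based train/test split keeping only the last byte of the MD5 digest.
import hashlib

def in_test_set(identifier, test_ratio=0.2):
    # the last byte behaves like a uniform value in 0..255
    last_byte = hashlib.md5(str(identifier).encode()).digest()[-1]
    return last_byte <= 256 * test_ratio          # bytes 0..51, roughly 20%

def split_train_test_by_id(df, id_column, test_ratio=0.2):
    in_test = df[id_column].apply(lambda id_: in_test_set(id_, test_ratio))
    return df.loc[~in_test], df.loc[in_test]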
