What value to set for max_len in pad sequences? - keras

Does the value of max_len in pad_sequences for deep learning depend on the use case? For example, if it were a Twitter-related classification task, should the value be set to 280 (the maximum number of characters in a tweet)?

Not necessarily. After you convert the texts into sequences with a tokenizer that has been fitted on the list of tweets, you can iterate over those sequences to inspect their lengths.
The maxlen parameter of the pad_sequences function refers to the maximum length of a sequence in tokens, so it does not mean the length of a tweet in characters; it means the length of the token sequence.
You also don't have to set it to the length of the longest tweet sequence; you can set it lower than that. If you do, it is better to remove stopwords and filter characters before fitting the tokenizer on the list of tweets, so that truncation loses less useful content.
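As a minimal sketch (assuming the standard Keras Tokenizer and pad_sequences utilities, with a hypothetical list of cleaned tweets), you could inspect the sequence lengths and pick maxlen from a high percentile:

import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

tweets = ["cleaned tweet text here", "another example tweet"]  # hypothetical corpus

tokenizer = Tokenizer()
tokenizer.fit_on_texts(tweets)                     # build the vocabulary from the tweets
sequences = tokenizer.texts_to_sequences(tweets)   # each tweet becomes a list of token ids

lengths = [len(seq) for seq in sequences]          # lengths in tokens, not characters
maxlen = int(np.percentile(lengths, 95))           # e.g. cover ~95% of tweets without excess padding

padded = pad_sequences(sequences, maxlen=maxlen, padding="post", truncating="post")
print(padded.shape)                                # (num_tweets, maxlen)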

Related

Subword vector in fastText?

I can't figure out what a subword input vector is. I read in the paper that the subword is hashed, so the subword becomes a hash code, and a hash code is a number, not a vector.
For example, the input vector of the word 'eating' is [0,0,0,1,0,0,0,0,0].
So what is the input vector of the subwords 'eat', 'ati', 'ing', ...?
Paper link: https://arxiv.org/pdf/1607.04606.pdf
"the subword is the hash code, hash code is a number, not a vector"
The FastText subwords are, as you've suggested, fragments of the full word. For the purposes of subword creation, FastText will also prepend/append special start-of-word and end-of-word characters. (If I recall correctly, it uses < & >.)
So, for the full word token 'eating', it is considered as '<eating>'.
All the 3-character subwords would be '<ea', 'eat', 'ati', 'tin', 'ing', 'ng>'.
All the 4-character subwords would be '<eat', 'eati', 'atin', 'ting', 'ing>'.
All the 5-character subwords would be '<eati', 'eatin', 'ating', 'ting>'.
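As an illustrative sketch of that subword extraction (my own reimplementation, not FastText's actual code):

def char_ngrams(word, min_n=3, max_n=5):
    # FastText wraps the token in start/end markers before extracting character n-grams
    token = "<" + word + ">"
    ngrams = []
    for n in range(min_n, max_n + 1):
        for i in range(len(token) - n + 1):
            ngrams.append(token[i:i + n])
    return ngrams

print(char_ngrams("eating", 3, 3))
# ['<ea', 'eat', 'ati', 'tin', 'ing', 'ng>']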
I see you've written out a "one-hot" representation of the full word 'eating' – [0,0,0,1,0,0,0,0,0] – as if 'eating' is the 4th word in a 9-word vocabulary. While diagrams & certain ways of thinking about the underlying model may consider such a one-hot vector, it's useful to realize that in actual code implementations, such a sparse one-hot vector for words is never actually created.
Instead, it's just represented as a single number – the index to the non-zero number. That's used as a lookup into an array of vectors of the configured 'dense' size, returning one input word-vector of that size for the word.
For example, imagine you have a model with a 1-million word known vocabulary, which offers 100-dimensional 'dense embedding' word-vectors. The word 'eating' is the 543,210th word.
That model will have an array of input-vectors that has one million slots, and each slot has a 100-dimensional vector in it. We could call it word_vectors_in. The word 'eating''s vector will be at word_vectors_in[543209] (because the 1st vector is at word_vectors_in[0]).
At no point during the creation/training/use of this model will an actual 1-million-long one-hot vector for 'eating' be created. Most often, it'll just be referred-to inside the code as the word-index 543209. The model will have a helper lookup dictionary/hashmap, let's call it word_index that lets code find the right slot for a word. So word_index['eating'] will be 543209.
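As a small sketch of that direct lookup (using NumPy and the hypothetical word_vectors_in / word_index names from above):

import numpy as np

vocab_size, dim = 1_000_000, 100
word_vectors_in = np.zeros((vocab_size, dim), dtype=np.float32)  # one dense vector per known word (learned in practice)
word_index = {"eating": 543209}                                  # maps each known word to its 0-based slot

vec = word_vectors_in[word_index["eating"]]  # a direct row lookup; no 1-million-long one-hot vector is ever built
print(vec.shape)                             # (100,)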
OK, now to your actual question, about the subwords. I've detailed how the single vector per known full word is stored, above, in order to contrast it with the different way subwords are handled.
Subwords are also stored in a big array of vectors, but that array is treated as a collision-oblivious hashtable. That is, by design, many subwords can and do all reuse the same slot.
Let's call that big array of subword vectors subword_vector_in. Let's also make it 1 million slots long, where each slot has a 100-dimensional vector.
But now, there is no dictionary that remembers which subwords are in which slots - for example, remembering that subword '<eat' is in arbitrary slot 78789.
Instead, the string '<eat' is hashed to a number, that number is restricted to the range of possible indexes into the subword array, and the vector at that index (let's say it's 12344) is used for the subword.
And then when some other subword comes along, maybe '<dri', it might hash to the exact same 12344 slot. That same vector then gets adjusted for that other subword (during training), or returned for both those subwords (and possibly many others) during later FastText-vector synthesis from the final model.
Notably, now even if there are far more than 1-million unique subwords, they can all be represented inside that single 1-million slot array, albeit with collisions/interference.
In practice, the collisions are tolerable. The many collisions from very rare subwords essentially just fuzz slots with random noise that mostly cancels out. The most common subwords, which tend to carry real meaning because of the way word roots, prefixes, and suffixes hint at word meaning in English and similar languages, overpower that noise and ensure the slot carries at least some hint of the implied meaning(s) of its most common subword(s).
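A rough sketch of that collision-oblivious bucket lookup (the hash function below is an illustrative stand-in for whatever string hash FastText actually uses):

import numpy as np

num_buckets, dim = 1_000_000, 100
subword_vector_in = np.zeros((num_buckets, dim), dtype=np.float32)  # shared, collision-prone slots

def slot_hash(subword, buckets=num_buckets):
    # FNV-1a-style string hash, restricted to the available bucket indexes
    h = 2166136261
    for byte in subword.encode("utf-8"):
        h = ((h ^ byte) * 16777619) & 0xFFFFFFFF
    return h % buckets

print(slot_hash("<eat"))  # some bucket index; no dictionary remembers this mapping
print(slot_hash("<dri"))  # a different subword may well land in the same slot, and that's allowed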
So when FastText assembles its final word-vector, by adding:
word_vectors_in[word_index['eating']] # learned known-word vector
+ subword_vector_in[slot_hash('<ea')] # 1st 3-char subword
+ subword_vector_in[slot_hash('eat')]
+ subword_vector_in[slot_hash('ati')]
... # other 3-char subwords
... # every 4-char subword
... # other 5-char subwords
+ subword_vector_in[slot_hash('ting>')] # last 5-char subword
…it gets something that's dominated by the (likely stronger-in-magnitude) known full-word vector, with some useful hints of meaning also contributed by the (probably lower-magnitude) many noisy subword vectors.
And then if we imagine that some other word comes along that's not part of the known 1-million-word vocabulary, say 'eatery', it gets nothing from word_vectors_in for the full word, but it can still do:
subword_vector_in[slot_hash('<ea')] # 1st 3-char subword
+ subword_vector_in[slot_hash('eat')]
+ subword_vector_in[slot_hash('ate')]
... # other 3-char subwords
... # every 4-char subword
... # other 5-char subwords
+ subword_vector_in[slot_hash('tery>')] # last 5-char subword
Because at least a few of those subwords likely carry meaningful hints about the meaning of 'eatery' (especially the 'eat' root, or even the venue/vendor connotation of the '-ery' suffix), this synthesized guess for an out-of-vocabulary (OOV) word will be better than a random vector, and often better than ignoring the word entirely in whatever higher-level process is using the FastText vectors.
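Putting the pieces together, a toy end-to-end version of that synthesis (purely illustrative, mirroring the pseudocode above; the array names, hash function, and n-gram range are assumptions, not FastText's actual implementation):

import numpy as np

dim, num_buckets, vocab_size = 100, 1_000_000, 1_000_000
word_vectors_in = np.zeros((vocab_size, dim), dtype=np.float32)     # known-word vectors (learned in practice)
subword_vector_in = np.zeros((num_buckets, dim), dtype=np.float32)  # hashed subword buckets (learned in practice)
word_index = {"eating": 543209}

def slot_hash(subword):
    h = 2166136261
    for byte in subword.encode("utf-8"):
        h = ((h ^ byte) * 16777619) & 0xFFFFFFFF
    return h % num_buckets

def char_ngrams(word, min_n=3, max_n=5):
    token = "<" + word + ">"
    return [token[i:i + n] for n in range(min_n, max_n + 1)
            for i in range(len(token) - n + 1)]

def synthesize_vector(word):
    # known word: start from its full-word vector; OOV word: start from zeros
    vec = word_vectors_in[word_index[word]].copy() if word in word_index else np.zeros(dim, np.float32)
    for sub in char_ngrams(word):
        vec += subword_vector_in[slot_hash(sub)]  # every subword contributes its (possibly shared) bucket vector
    return vec

print(synthesize_vector("eating").shape)  # in-vocabulary: full-word vector plus subword vectors
print(synthesize_vector("eatery").shape)  # OOV: built from subword vectors alone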

Can we segregate gibberish from meaningful sentences just by looking at the features of the 512 dimensional Universal Sentence Encoder Vector?

Universal Sentence Encoder encodes sentences into a vector of 512 features. My proposition is that if a sentence is gibberish then most of the features will be very close to zero, whereas if a sentence has meaning then some of the 512 features will be much greater than or much less than zero. Can we then, just by looking at the distribution of the vector's feature values, decide which vector encodes meaning and which encodes gibberish?
It seems that USE encodes features in a fairly arbitrary fashion. I ran a number of experiments and saw the feature values scale up and down arbitrarily, with no regard to whether the sentence was gibberish or meaningful. The experiments included counting the numbers of positive and negative features in meaningful and gibberish vectors, and computing the mean and standard deviation of the features, but nothing showed a pattern that could delineate the two.
Around 30 samples were taken, and no pattern in the counts of positive and negative features, the standard deviation, or the mean was observed that could separate a gibberish USE vector from a meaningful one.
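A quick sketch of that kind of check (assuming the publicly released TF Hub version of the encoder; the model URL and example sentences are just placeholders):

import numpy as np
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

sentences = ["the cat sat on the mat",          # meaningful
             "blorp zug kwee snarf lop dree"]   # gibberish

vectors = embed(sentences).numpy()              # shape (2, 512)
for sent, vec in zip(sentences, vectors):
    pos, neg = int((vec > 0).sum()), int((vec < 0).sum())
    print("%-32s pos=%d neg=%d mean=%.4f std=%.4f" % (sent, pos, neg, vec.mean(), vec.std()))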

How to set maximum sentence length in spacy?

I have a string that I converted to a spaCy Doc. However, when I iterate through the Doc.sents attribute, I get sentences that I find are too long.
Is there a way when doing doc = nlp(string) to set the maximum length for a single sentence?
Thanks a lot, this would really help.
No, there is no way to do this.
In natural language, sentences rarely get extremely long in practice, but there is no strict limit on the length of a sentence. Imagine a sentence listing every kind of fruit, for example.
Partly because of that, it's not clear what to do with overlong sentences. Do you split them into segments of the max length or less? Do you throw them out entirely, or cut off words after the first chunk? The right approach depends on your application.
It should typically be easy to implement the strategy you want on top of the .sents iterator.
To split sentences into a max length or less you can do this:
def my_sents(doc, max_len):
    for sent in doc.sents:
        if len(sent) < max_len:
            yield sent
            continue
        # this is a long one
        offset = 0
        while offset < len(sent):
            yield sent[offset:offset + max_len]
            offset += max_len
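For example (using the my_sents generator above; the pipeline name and threshold are placeholders):

import spacy

nlp = spacy.load("en_core_web_sm")  # any pipeline that performs sentence segmentation
doc = nlp("First sentence. A much longer second sentence that keeps going and going and going and going.")

for chunk in my_sents(doc, max_len=6):  # spans of at most 6 tokens each
    print(chunk.text)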
However, note that for many applications this isn't useful. If you have a max length for sentences you should really think about why you have it and adjust your approach based on that.

How do I limit word length in FastText?

I am using FastText to compute skipgrams on a corpus containing a long sequence of characters with no spaces. After an hour or so, FastText produces a model containing vectors (of length 100) corresponding to "words" of length 50 characters from the corpus.
I tried setting the -minn and -maxn parameters, but that does not help (I suspected it wouldn't, but tried anyway), and the -wordNgrams parameter only applies if there are spaces, I guess(?). This is just a long stream of characters representing state, without spaces.
The documentation doesn't seem to have any information on this (or perhaps I'm missing something?)
The tool just takes whatever space-delimited tokens you feed it.
If you want to truncate, or discard, tokens that are longer than 50 characters (or any other threshold), you'd need to preprocess the data yourself.
(If your question is actually something else, add more details to the question showing example lines from your corpus, how you're invoking fasttext on it, how you are reviewing unsatisfactory results, and how you would expect satisfactory results to look instead.)
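For instance, a simple preprocessing pass (the file names and the 50-character threshold are placeholders) that splits over-long tokens before handing the corpus to fasttext might look like this:

MAX_TOKEN_LEN = 50

with open("corpus.txt") as fin, open("corpus_preprocessed.txt", "w") as fout:
    for line in fin:
        out_tokens = []
        for token in line.split():
            # split any over-long token into chunks of at most MAX_TOKEN_LEN characters
            # (alternatively, truncate with token[:MAX_TOKEN_LEN], or drop the token entirely)
            out_tokens.extend(token[i:i + MAX_TOKEN_LEN]
                              for i in range(0, len(token), MAX_TOKEN_LEN))
        fout.write(" ".join(out_tokens) + "\n")

# then train as usual, e.g.: fasttext skipgram -input corpus_preprocessed.txt -output model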

Tensorflow : RNN with char input

Suppose I want to train an RNN on pseudo-random words (not part of any dictionary) so I can't use word2vec. How can I represent each char in the word using tensorflow?
If you are just working with characters, you can use a one-hot vector of size 128, which can represent every ASCII character (you may want something smaller, since you probably won't use all the ASCII characters; maybe just 26, one per letter). You don't really need anything like word vectors, since the range of possibilities is small.
Actually, when you use one-hot encodings you are effectively learning a vector for each character. Say your first dense layer (or RNN layer) contains 100 neurons. That results in a 128x100 matrix multiplied by the one-hot encoding. Since all but one of the values are zero, you are essentially selecting a single row of size 100 from that matrix, which is a vector representation of that character. Essentially, the first matrix is just a list of the vectors that represent each character, and your model learns these vector representations. Because of the sparseness of the one-hot encodings, it is often faster to just look up the row rather than carry out the full matrix multiply. This is what tf.nn.embedding_lookup or tf.gather is used for.
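A small sketch of that equivalence (assuming TensorFlow 2 and a hypothetical 128-entry ASCII vocabulary):

import tensorflow as tf

vocab_size, embed_dim = 128, 100
embedding = tf.Variable(tf.random.normal([vocab_size, embed_dim]))  # one learnable row per character

char_ids = tf.constant([ord(c) for c in "word"])          # ASCII codes used as integer indices

# Route 1: explicit one-hot followed by a matrix multiply
one_hot = tf.one_hot(char_ids, depth=vocab_size)          # shape (4, 128), mostly zeros
via_matmul = tf.matmul(one_hot, embedding)                # shape (4, 100)

# Route 2: direct row lookup, skipping the sparse multiply
via_lookup = tf.nn.embedding_lookup(embedding, char_ids)  # shape (4, 100)

print(bool(tf.reduce_all(tf.abs(via_matmul - via_lookup) < 1e-5)))  # True: same rows either way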
