Does anyone have any idea of how to apply meta-learning for neural machine translation?
I have read some papers that consider multiple language pairs as different tasks for meta-learning. Is it possible to do it with just one language pair? For example, translating from English into a low-resource language using meta-learning.
I am not a specialist in this domain. Is there any public code that I can use?
Thank you in advance for your help.
I don't want to implement the code from scratch. I am looking for public, open-source code that I can use and then modify for my specific problem.
I was thinking about how to train the Universal Sentence Encoder in Portuguese. Can you share some tips? For example, what kind of dataset do I need? Does transfer learning make sense?
Thank you!
You can use the TensorFlow-Hub module in a TensorFlow model that trains on Portuguese tasks. Which tasks/data to use is the main question, and that's outside the realm of tensorflow-hub, i.e. you can ask that question and tag it with "machine-learning" and "nlp".
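For illustration, here is a minimal sketch of embedding Portuguese sentences with a Universal Sentence Encoder module from TensorFlow Hub. The multilingual module URL/version below is an assumption; check tfhub.dev for the current one (the multilingual variant is the one that covers Portuguese).

import tensorflow_hub as hub
import tensorflow_text  # registers the custom ops the multilingual model needs

# Minimal sketch: embed Portuguese sentences with a TF-Hub module.
# The module URL/version is an assumption; verify it on tfhub.dev.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")

sentences = ["Olá, mundo!", "Gosto de processamento de linguagem natural."]
embeddings = embed(sentences)
print(embeddings.shape)  # one 512-dimensional vector per sentence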
You might also want to take a look at links such as
http://www.nltk.org/howto/portuguese_en.html
https://stackoverflow.com/search?q=Portuguese+corpus
I am new to Devanagari NLP. Are there any groups or resources that would help me get started with NLP for a Devanagari-script language (mostly Nepali, or similar languages like Hindi)? I want to be able to develop fonts for Devanagari and also build a font-processing application. If anyone working in this field could give me some advice, it would be highly appreciated.
Thanks in advance
I am new to Devanagari NLP. Are there any groups or resources that would help me get started with NLP for a Devanagari-script language (mostly Nepali, or similar languages like Hindi)?
You can use the pre-trained embeddings provided by fastText [https://fasttext.cc/docs/en/pretrained-vectors.html#content] and use deep-learning RNN models such as LSTMs for text classification and sentiment analysis (a minimal loading sketch follows after these pointers).
You can find some datasets for named entity recognition here [http://ltrc.iiit.ac.in/ner-ssea-08/index.cgi?topic=5].
For processing Indian languages, you can refer to the Indic NLP Library here [https://github.com/anoopkunchukuttan/indic_nlp_library].
NLTK supports Indian languages; for POS tagging and related NLP tasks you can refer here [http://www.nltk.org/_modules/nltk/corpus/reader/indian.html].
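Building on the fastText suggestion above, here is a minimal sketch of loading the pre-trained vectors with gensim. The file name wiki.ne.vec (Nepali) is an assumption; download the file for your language from the fastText page linked above.

from gensim.models import KeyedVectors

# Minimal sketch: load pre-trained fastText word vectors (text .vec format).
# "wiki.ne.vec" (Nepali) is an assumed file name; use the file for your language.
vectors = KeyedVectors.load_word2vec_format("wiki.ne.vec", binary=False)

# Nearest neighbours in the embedding space, e.g. for the Nepali word "Nepal".
print(vectors.most_similar("नेपाल", topn=5))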
Are there any groups or resources that would help me get started with NLP for a Devanagari-script language?
The Bhasa Sanchar project under Madan Puraskar Pustakalaya has developed a Nepali corpus. You may request a Nepali corpus for non-commerical purposes from the contact provided in the link above.
Python's NLTK has a Hindi language corpus. You may import it using:
from nltk.corpus import indian
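A minimal usage sketch (the corpus ships with small POS-tagged sections, including Hindi):

import nltk
nltk.download('indian')  # one-time corpus download
from nltk.corpus import indian

print(indian.fileids())                       # e.g. ['bangla.pos', 'hindi.pos', ...]
print(indian.tagged_words('hindi.pos')[:10])  # first few (word, tag) pairs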
For gaining insight into Devanagari-based NLP, I suggest you go through research papers. Nepali is an under-resourced language; much work is yet to be done, and it might be difficult to find content for it.
You should probably look into language detection, text classification, and sentiment analysis, among others (preferably based on a POS-tagging library built on the corpus), to grasp the basics.
Regarding the second part of the question:
I am pretty sure font development doesn't fall under the domain of natural language processing. Did you mean something else?
Are there any tools that analyze the meaning of given sentences? Recommendations are greatly appreciated.
Thanks in advance!
I am also looking for similar tools. One thing I found recently was this sentiment analysis tool built by researchers at Stanford.
It provides a model for analyzing the sentiment of a given sentence. It's interesting, and even this seemingly simple idea is quite involved to model accurately. It also uses machine learning to achieve higher accuracy. There is a live demo where you can input sentences to analyze.
http://nlp.stanford.edu/sentiment/
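If you want to drive something similar from Python, here is a minimal sketch with stanza, the Stanford NLP group's Python package, which includes a sentence-level sentiment processor. Using stanza as a stand-in for the Java-based tool behind the demo is my assumption, not something the page prescribes.

import stanza

stanza.download("en")  # one-time model download
nlp = stanza.Pipeline(lang="en", processors="tokenize,sentiment")

doc = nlp("The movie was surprisingly good. The ending was terrible.")
for sentence in doc.sentences:
    # sentiment: 0 = negative, 1 = neutral, 2 = positive
    print(sentence.text, "->", sentence.sentiment)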
I also saw this RelEx semantic dependency relationship extractor.
http://wiki.opencog.org/w/Sentence_algorithms
Some natural language understanding tools can analyze the meaning of sentences, including NLTK and Attempto Controlled English. There are several implementations of discourse representation structures and semantic parsers with a similar purpose.
There are also several parsers that can be used to generate a meaning representation from the text that is being parsed.
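For a flavour of what such meaning representations look like, here is a minimal sketch with NLTK's semantics package (first-order logic is just one of the formalisms mentioned above):

from nltk.sem.logic import Expression

# Minimal sketch: parse first-order-logic meaning representations with NLTK.
read_expr = Expression.fromstring
print(read_expr('walk(john)'))                  # a simple predication
print(read_expr('all x.(walk(x) -> move(x))'))  # a quantified rule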
I am looking for a tool that can analyze the emotion of short texts. I searched for a week and I couldn't find a good one that is publicly available. The ideal tool is one that takes a short text as input and guesses the emotion. It is preferably a standalone application or library.
I don't need tools that have to be trained on texts. And although similar questions have been asked before, none of them received satisfactory answers.
I searched the Internet and read some papers, but I couldn't find the tool I want. Currently I have found SentiStrength, but its accuracy is not good. I am using emotion dictionaries right now. I feel that some syntax parsing may be necessary, but it's too complex for me to build myself. Furthermore, it has been researched by others and I don't want to reinvent the wheel. Does anyone know of such publicly available research software? I need a tool that doesn't need training before use.
Thanks in advance.
I think that you will not find a more accurate program than SentiStrength (or SoCal) for this task - other than machine learning methods in a specific narrow domain. If you have a lot (>1000) of hand-coded data for a specific domain then you might like to try a generic machine learning approach based on your data. If not, then I would stop looking for anything better ;)
Identifying entities and extracting precise information from short texts, let alone sentiment, is a very challenging problem, especially with short texts, because of the lack of context. However, there are a few unsupervised approaches to extracting sentiment from text, mainly proposed by Turney (2000). Look at that; maybe you can adapt its method of extracting sentiment based on the adjectives in a short text for your use case. It is, however, important to note that this might require you to POS-tag your short text accurately.
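A minimal sketch of that POS-tagging step with NLTK, extracting the adjectives you would then score against a sentiment lexicon (NLTK's default English tagger is an assumption; use whatever tagger fits your texts):

import nltk
# One-time downloads:
# nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

text = "The battery life is excellent but the screen is awful."
tagged = nltk.pos_tag(nltk.word_tokenize(text))

# Keep the adjectives (Penn Treebank tags JJ/JJR/JJS): the raw material
# for a Turney-style unsupervised sentiment score.
adjectives = [word for word, tag in tagged if tag.startswith("JJ")]
print(adjectives)  # ['excellent', 'awful'] -> look these up in a lexicon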
Maybe EmoLib could be of help.
I have started working on a project which requires natural language processing. We have to do spell checking as well as map sentences to phrases and their synonyms. I first thought of using GATE, but I am confused about what to use. I found an interesting post here which got me even more confused.
http://lordpimpington.com/codespeaks/drupal-5.1/?q=node/5
Please help me decide on what suits my purpose best. I am working on a web application which will use this NLP tool as a service.
You didn't really give much info, but try this: http://www.nltk.org/
I don't think NLTK does spell checking (I could be wrong on this), but it can do part-of-speech tagging for text input.
For finding/matching synonyms you could use something like WordNet http://wordnet.princeton.edu/
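A minimal sketch of a WordNet synonym lookup via NLTK:

from nltk.corpus import wordnet
# One-time download: nltk.download('wordnet')

# Collect synonym candidates for a word across all of its synsets.
synonyms = {lemma.name() for synset in wordnet.synsets("quick")
            for lemma in synset.lemmas()}
print(synonyms)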
If you're doing something really domain-specific, I would recommend coming up with your own ontology for domain-specific terms.
If you are using Python you can develop a spell checker with Python Enchant.
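A minimal sketch, assuming the pyenchant package and an en_US dictionary are installed:

import enchant

d = enchant.Dict("en_US")
print(d.check("langauge"))    # False: misspelled
print(d.suggest("langauge"))  # suggestions, e.g. ['language', ...]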
NLTK is good for developing a sentiment analysis system too; I have some prototypes of the same as well.
Jaggu
If you are using deep-learning-based models and you have sufficient data, you can implement task-specific models for any purpose. With the development of deep-learning-based language models, you can use word-embedding-based models together with lexicon resources to obtain synonyms and antonyms. You can also follow the links below for more resources.
https://stanfordnlp.github.io/CoreNLP/
https://www.nltk.org/
https://wordnet.princeton.edu/
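For the embedding-based synonym idea, here is a minimal sketch using gensim's downloader API. The model name is one of gensim's bundled options; note that nearest neighbours are related words rather than strict synonyms, which is why pairing embeddings with a lexicon such as WordNet helps.

import gensim.downloader as api

# Minimal sketch: nearest neighbours in an embedding space as synonym
# candidates. The model is downloaded on first use.
model = api.load("glove-wiki-gigaword-100")
print(model.most_similar("happy", topn=5))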