Implementation of a given mathematical model into CPLEX - modeling

At university, I was given a paper about inventory management. The task now is to implement part of the model in IBM CPLEX, and I would therefore need some help.

The CPLEX documentation at https://www.ibm.com/support/knowledgecenter/SSSA5P_12.9.0/ilog.odms.studio.help/Optimization_Studio/topics/COS_home.html has a good number of tutorials and examples. To get started with CPLEX, it is a good idea to work through these and the many examples that ship with CPLEX; some of them cover inventory problems.
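To get a feel for the modeling side before tackling the paper's full model, here is a minimal multi-period lot-sizing sketch using the docplex Python API (one way to drive CPLEX from code); the demand, cost, and capacity numbers are invented purely for illustration.

```python
# Minimal lot-sizing sketch with the docplex Python API (illustrative data only).
from docplex.mp.model import Model

periods = range(4)
demand = [20, 30, 25, 40]          # hypothetical demand per period
prod_cost, hold_cost, capacity = 3.0, 1.0, 50

mdl = Model(name="toy_inventory")
x = mdl.continuous_var_list(periods, name="produce")   # units produced in period t
s = mdl.continuous_var_list(periods, name="stock")     # inventory at end of period t

for t in periods:
    prev = s[t - 1] if t > 0 else 0                      # starting inventory assumed 0
    mdl.add_constraint(prev + x[t] == demand[t] + s[t])  # inventory flow balance
    mdl.add_constraint(x[t] <= capacity)                 # production capacity

mdl.minimize(mdl.sum(prod_cost * x[t] + hold_cost * s[t] for t in periods))
solution = mdl.solve()
if solution:
    print(solution)
```

The paper's actual model will have its own sets, parameters, and constraints, but the pattern of declaring variables, adding constraints, and setting an objective carries over.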

Related

NLP (topic modeling) with PLSA

I'm trying to understand PLSA (probabilistic latent semantic analysis) for text modeling (NLP). The problem is that every article I read gives only the maths (probabilities), without any pseudo-algorithm or anything else to help you understand it. Is there any link where I can understand PLSA, please?
The P in PLSA stands for probabilistic, so I am afraid you will not find an article that does not talk about probabilities. The model itself is a probabilistic model, and some knowledge of joint and conditional distributions, independence, etc. is expected. I would recommend https://medium.com/nanonets/topic-modeling-with-lsa-psla-lda-and-lda2vec-555ff65b0b05, which I found to be the best online resource. There is a bit of math, but most of it is explained well. As for a PLSA algorithm, I am not sure; it is not used that often, and one almost always prefers LDA. I did find a GitHub implementation that solves PLSA using EM here: https://github.com/laserwave/plsa.
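Since most write-ups stop at the probability formulas, a bare-bones numpy sketch of the EM updates for PLSA may help make the E-step and M-step concrete; the document-term counts below are random toy data, and there is no convergence check or smoothing.

```python
# Bare-bones PLSA via EM on a small toy term-document count matrix (numpy only).
import numpy as np

rng = np.random.default_rng(0)
N = rng.integers(0, 5, size=(8, 12)).astype(float)   # toy doc-term counts: 8 docs, 12 words
D, W = N.shape
K = 3                                                # number of latent topics
eps = 1e-12                                          # guard against division by zero

p_z_d = rng.random((D, K)); p_z_d /= p_z_d.sum(axis=1, keepdims=True)   # P(z|d)
p_w_z = rng.random((K, W)); p_w_z /= p_w_z.sum(axis=1, keepdims=True)   # P(w|z)

for _ in range(50):
    # E-step: responsibilities P(z|d,w), shape (D, W, K)
    joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
    p_z_dw = joint / (joint.sum(axis=2, keepdims=True) + eps)
    # M-step: re-estimate P(w|z) and P(z|d) from expected counts n(d,w)*P(z|d,w)
    weighted = N[:, :, None] * p_z_dw
    p_w_z = weighted.sum(axis=0).T                   # shape (K, W)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True) + eps
    p_z_d = weighted.sum(axis=1)                     # shape (D, K)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True) + eps

print("top word indices per topic:", np.argsort(-p_w_z, axis=1)[:, :3])
```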

Implementation of Multitask Multiple-Kernel Learning in python using shogun

I'm a chemistry student, new to the community, interested in implementing the paper by Widmer & Raetsch (2012). In particular, I would like to implement task similarity learning using the power-set approach. I have around 1,500 examples classified into 10 tasks, for which I would like to compute pairwise task similarities.
It looks like the important parts are
setting custom kernels to mask only certain subtasks, and
for a certain pair of examples, adding all subtask kernels that the examples belong to,
but I have no idea how to do this using Shogun in Python. Can anybody please guide me to example code or a tutorial that I can look into?
Thank you so much in advance!
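A library-agnostic numpy sketch of the masking idea described in the question may help: each subtask contributes a base kernel only for pairs of examples that both belong to it, and the masked kernels are summed into the combined multitask kernel. The subtask sets and data below are made up for illustration; in Shogun the resulting precomputed matrix would presumably be supplied as a custom kernel.

```python
# Sum of per-subtask masked kernels (illustrative data; not Shogun-specific).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 4))                 # 6 toy examples, 4 features
tasks = np.array([0, 0, 1, 1, 2, 2])        # task id of each example
# power-set style subtasks: each subtask is a set of task ids (toy choice)
subtasks = [{0}, {1}, {2}, {0, 1}, {1, 2}, {0, 1, 2}]

base_K = X @ X.T                            # base (linear) kernel on all examples

combined = np.zeros_like(base_K)
for subtask in subtasks:
    member = np.isin(tasks, list(subtask)).astype(float)  # 1 if example's task is in the subtask
    mask = np.outer(member, member)                        # 1 only if *both* examples belong
    combined += mask * base_K                              # masked contribution of this subtask

print(combined)
```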

Tools to generate concepts and concept graph for searched articles

When searching for a paper using an online library such as Springer, the returned result will also show related concepts automatically extracted from the paper, as well as a knowledge-relationship graph based on these concepts. The following is a screenshot of the search output.
I would like to know which kind of algorithms and software are able to generate this kind of output. Are there any open-source tools being able to do that?
The algorithm being used is K-Means. K-Means is an unsupervised clustering algorithm: articles are clustered by topic. Some articles contain multiple topics, many of which are shared between articles; those shared topics then become branches emerging from the initial topic. scikit-learn is a great Python library that does clustering very well, and R is also great for clustering. Hope this helps!
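As a concrete starting point, here is a minimal scikit-learn sketch that clusters a few toy article snippets with TF-IDF features and K-Means, then lists the top terms per cluster as a rough stand-in for the extracted "concepts" (the documents are invented for illustration).

```python
# Cluster a handful of toy article snippets by topic with TF-IDF + K-Means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "deep learning for image classification",
    "convolutional networks recognize images",
    "inventory management and supply chain optimization",
    "optimizing stock levels in a supply chain",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

km = KMeans(n_clusters=2, random_state=0, n_init=10)
labels = km.fit_predict(X)
print(labels)                                   # cluster id per document

# Top terms per cluster: a crude proxy for the "concepts" shown by the library.
terms = tfidf.get_feature_names_out()
for c, center in enumerate(km.cluster_centers_):
    top = center.argsort()[::-1][:3]
    print(c, [terms[i] for i in top])
```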

Generating questions from text (NLP)

What approaches are there to generating questions from a sentence? Let's say I have the sentence "Jim's dog was very hairy and smelled like wet newspaper" - which toolkit is capable of generating a question like "What did Jim's dog smell like?" or "How hairy was Jim's dog?"
Thanks!
Unfortunately there isn't one, exactly. There is some code written as part of Michael Heilman's PhD dissertation at CMU; perhaps you'll find it and its corresponding papers interesting?
If it helps, the topic you want information on is called "question generation". This is pretty much the opposite of what Watson does, even though "here is an answer, generate the corresponding question" is exactly how Jeopardy is played. But actually, Watson is a "question answering" system.
In addition to the link to Michael Heilman's PhD provided by dmn, I recommend checking out the following papers:
Automatic Question Generation and Answer Judging: A Q&A Game for Language Learning (Yushi Xu, Anna Goldie, Stephanie Seneff)
Automatic Question Generation from Sentences (Husam Ali, Yllias Chali, Sadid A. Hasan)
As of 2022, Haystack provides a comprehensive suite of tools for question generation and answering using the latest Transformer models and transfer learning.
From their website,
Haystack is an open-source framework for building search systems that work intelligently over large document collections. Recent advances in NLP have enabled the application of question answering, retrieval and summarization to real world settings and Haystack is designed to be the bridge between research and industry.
NLP for Search: Pick components that perform retrieval, question answering, reranking and much more
Latest models: Utilize all transformer based models (BERT, RoBERTa, MiniLM, DPR) and smoothly switch when new ones get published
Flexible databases: Load data into and query from a range of databases such as Elasticsearch, Milvus, FAISS, SQL and more
Scalability: Scale your system to handle millions of documents and deploy them via REST API
Domain adaptation: All tooling you need to annotate examples, collect user-feedback, evaluate components and finetune models.
Based on my personal experience, I was about 95% successful in generating questions and answers during my internship for training purposes. I have a sample web user interface and the code to demonstrate it: My Web App and Code.
Huge shoutout to the developers on the Slack channel for helping noobs in AI like me! Implementing and deploying an NLP model has never been easier than with Haystack. I believe this is the only tool out there where one can easily develop and deploy.
Disclaimer: I do not work for deepset.ai or Haystack, am just a fan of haystack.
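For reference, a question-generation sketch in the spirit of Haystack's (1.x) question generation tutorial might look like the following; the exact module paths, node names, and output keys vary between Haystack versions, so treat this as an assumed outline rather than verified code.

```python
# Question generation with Haystack 1.x (assumed API, following its tutorial structure).
from haystack import Document
from haystack.nodes import QuestionGenerator
from haystack.pipelines import QuestionGenerationPipeline

text = "Jim's dog was very hairy and smelled like wet newspaper."

question_generator = QuestionGenerator()                 # wraps a pretrained seq2seq model
pipeline = QuestionGenerationPipeline(question_generator)

result = pipeline.run(documents=[Document(content=text)])
# The result dict typically holds a "generated_questions" entry; inspect it if your
# Haystack version uses a different key.
print(result["generated_questions"])
```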
As of 2019, question generation from text has become possible, and there are several research papers on this task.
The current state-of-the-art question generation model uses language modeling with different pretraining objectives. The research paper, code implementation, and pre-trained model are available to download on the Papers with Code website (link).
This model can be used to fine-tune on your own dataset (instructions for finetuning are given here).
I would suggest checking out this link for more solutions. I hope it helps.

Natural Language Processing Algorithm for mood of an email

One simple question (but I haven't quite found an obvious answer in the NLP stuff I've been reading, which I'm very new to):
I want to classify emails with a probability along certain dimensions of mood. Is there an NLP package out there that deals with this specifically? Is there an obvious starting point in the literature where I should start reading?
For example, if I got a short email something like "Hi, I'm not very impressed with your last email - you said the order amount would only be $15.95! Regards, Tom" then it might get 8/10 for Frustration and 0/10 for Happiness.
The actual list of moods isn't so important, but a short list of generally positive vs generally negative moods would be useful.
Thanks in advance!
--Trindaz on Fedang #NLP
You can do this with a number of different NLP tools, but nothing to my knowledge comes with it ready out of the box. Perhaps the easiest place to start would be with LingPipe (java), and you can use their very good sentiment analysis tutorial. You could also use NLTK if python is more your bent. There are some good blog posts over at Streamhacker that describe how you would use Naive Bayes to implement that.
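For instance, here is a tiny NLTK Naive Bayes sketch of the approach; the handful of labeled examples is invented and far too small for real use, but it shows the shape of the pipeline (in practice you would label real emails along your own mood dimensions).

```python
# Tiny Naive Bayes mood classifier with NLTK (made-up training examples).
import nltk

train = [
    ("I'm not very impressed with your last email", "frustrated"),
    ("this is unacceptable, you said it would be 15.95", "frustrated"),
    ("thanks so much, the order arrived and it's great", "happy"),
    ("really pleased with the quick response", "happy"),
]

def features(text):
    # bag-of-words presence features
    return {word.lower(): True for word in text.split()}

classifier = nltk.NaiveBayesClassifier.train(
    [(features(text), label) for text, label in train]
)

email = "you said the order amount would only be $15.95!"
print(classifier.classify(features(email)))                          # predicted mood
print(classifier.prob_classify(features(email)).prob("frustrated"))  # probability-style score
```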
Check out AlchemyAPI for sentiment analysis tools and scikit-learn or any other open machine learning library for the classifier.
If you have not decided to code the implementation yourself, you can also have the data classified by some other tool; the Google Prediction API may be an alternative.
Either way, you will need some labeled data and will have to do the preprocessing, but using such a tool may help you get better accuracy more easily.
