Monte Carlo Simulation code/library in Python - python-3.x

I am trying to learn how to apply Monte Carlo simulation in Python for predicting/estimating time series data such as sales/deposit volumes, interest rates, etc. I can understand the basic idea behind the method.
So I am wondering if there is a practical example anywhere with a step-by-step explanation of how to develop a Monte Carlo model (i.e. choose a distribution, apply a for loop over the selected parameters, etc.). As you can tell, I am new to the 'sport'.
Any help, suggestion, or assistance is much appreciated.
Thanks
I looked at Python libraries, such as Statsmodels, but I couldn't find any relevant simulation model. I also looked at examples such as the one below:
Monte Carlo Simulation in Python
but I am searching for something that elaborates more on the application process.
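For what it's worth, a minimal end-to-end sketch of that process might look like the following, assuming (purely for illustration) that the series follows geometric Brownian motion with normally distributed log-returns; the starting value, drift, and volatility below are hypothetical placeholders you would estimate from historical data:

```python
# A minimal Monte Carlo sketch, assuming geometric Brownian motion.
# All parameter values below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(seed=42)

s0 = 100.0        # starting value (e.g. last observed sales volume)
mu = 0.05         # assumed annual drift, estimated from historical data
sigma = 0.20      # assumed annual volatility, estimated from historical data
horizon = 252     # number of future steps (e.g. trading days in one year)
n_paths = 10_000  # number of simulated scenarios

dt = 1.0 / horizon
# Draw all random log-return shocks at once instead of an explicit for loop
shocks = rng.normal(loc=(mu - 0.5 * sigma**2) * dt,
                    scale=sigma * np.sqrt(dt),
                    size=(n_paths, horizon))
paths = s0 * np.exp(np.cumsum(shocks, axis=1))

# Summarize the simulated distribution of the value at the horizon
final = paths[:, -1]
print("mean:", final.mean())
print("5th / 95th percentiles:", np.percentile(final, [5, 95]))
```

The same pattern generalizes: pick a distribution that plausibly describes your series, estimate its parameters from history, simulate many paths, then read estimates and uncertainty bands off the resulting distribution.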

Related

How to know which features contribute significantly in prediction models?

I am a novice in DS/ML. I am trying to solve the Titanic case study on Kaggle, but my approach has not been systematic so far. I have used correlation to find relationships between variables, and have tried KNN and Random Forest classification, but my models' performance has not improved. I have selected features based on the correlation between variables.
Please guide me if there are certain scikit-learn methods which can be used to identify features that contribute significantly to a forecast.
Various boosting techniques can improve accuracy considerably (to approximately 99% in some cases); I suggest you use gradient boosting.
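To make that concrete, here is a minimal scikit-learn sketch (with synthetic data standing in for the Titanic set; all parameters are illustrative) showing two common ways to rank features:

```python
# A minimal feature-selection sketch with scikit-learn; the synthetic
# dataset is a stand-in for your own encoded feature matrix X and target y.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Option 1: univariate statistical test (ANOVA F-score per feature)
selector = SelectKBest(score_func=f_classif, k=5).fit(X, y)
print("F-scores:", selector.scores_)

# Option 2: impurity-based importances from a boosted tree ensemble
model = GradientBoostingClassifier(random_state=0).fit(X, y)
print("importances:", model.feature_importances_)
```

Either ranking gives you a principled starting point for dropping weak features, rather than relying on pairwise correlation alone.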

Replicating Semantic Analysis Model in Demo

Good day, I am a student who is interested in NLP. I have come across the demo on AllenNLP's homepage, which states:
The model is a simple LSTM using GloVe embeddings that is trained on the binary classification setting of the Stanford Sentiment Treebank. It achieves about 87% accuracy on the test set.
Is there any reference to the sample code, or any tutorial that I can follow to replicate this result, so that I can learn more about this subject? I am trying to obtain a regression output (instead of classification).
I hope that someone can point me in the right direction. Any help is much appreciated. Thank you!
AllenAI provides all the code for its examples and libraries open source on GitHub, including AllenNLP.
I found exactly how the example was run here: https://github.com/allenai/allennlp/blob/master/allennlp/tests/data/dataset_readers/stanford_sentiment_tree_bank_test.py
However, to make it a regression task, you'll have to work directly in PyTorch, which is the underlying framework for AllenNLP.
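As an illustration of that tweak, here is a minimal, hypothetical PyTorch sketch: the classification head and cross-entropy loss are replaced with a single linear output unit trained with mean squared error. The vocabulary size, dimensions, and fake batch are placeholders, not AllenNLP's actual model:

```python
# A hypothetical LSTM sentiment *regressor* in plain PyTorch: one linear
# output unit trained with MSE instead of a softmax with cross-entropy.
import torch
import torch.nn as nn

class SentimentRegressor(nn.Module):
    def __init__(self, vocab_size=20_000, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # one real-valued output

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)
        _, (h_n, _) = self.lstm(embedded)     # final hidden state
        return self.head(h_n[-1]).squeeze(-1)

model = SentimentRegressor()
loss_fn = nn.MSELoss()  # regression loss instead of nn.CrossEntropyLoss()

tokens = torch.randint(0, 20_000, (8, 25))  # fake batch of token ids
targets = torch.rand(8)                     # fake sentiment scores in [0, 1]
loss = loss_fn(model(tokens), targets)
loss.backward()
```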

What are the algorithms used in the pROC package for ROC analysis?

I am trying to figure out which algorithms are used within the pROC package to conduct ROC analysis. For instance, what algorithm corresponds to the condition 'algorithm==2'? I only recently started using R in conjunction with Python because of the ease of finding CI estimates, significance test results, etc. My Python code uses Linear Discriminant Analysis (LDA) to get results on a binary classification problem. When using the pROC package to compute confidence interval estimates for AUC, sensitivity, specificity, etc., all I have to do is load my data and run the package. The AUC I get when using pROC is the same as the AUC returned by my Python code that uses LDA. In order to report consistent results, I am trying to find out whether LDA is one of the algorithm choices within pROC. Any ideas on this, or how to go about figuring it out, would be very helpful. Where can I access the source code for pROC?
The core algorithms of pROC are described in a 2011 BMC Bioinformatics paper. Some algorithms added later are described in the PDF manual. As with every CRAN package, the source code is available from the CRAN package page. Like many R packages these days, it is also on GitHub.
To answer your question specifically: unfortunately I don't have a good reference for the algorithm used to calculate the points of the ROC curve with algorithm 2. By looking at it, you will realize it is ultimately equivalent to the standard ROC curve algorithm, albeit more efficient as the number of thresholds increases, as I tried to explain in this answer to a question on Cross Validated. But you will have to trust me (and most packages calculating ROC curves) on that.
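To make "the standard ROC curve algorithm" concrete, here is a minimal numpy sketch (my own illustration, not pROC's actual code) that sweeps a threshold across descending scores; it ignores tied scores for simplicity:

```python
# Standard ROC construction: sort scores descending, then at each cut-off
# count how many positives and negatives have been admitted so far.
import numpy as np

def roc_points(y_true, scores):
    order = np.argsort(-scores)   # indices of scores, descending
    y = np.asarray(y_true)[order]
    tps = np.cumsum(y)            # true positives at each cut-off
    fps = np.cumsum(1 - y)        # false positives at each cut-off
    tpr = tps / y.sum()
    fpr = fps / (len(y) - y.sum())
    return fpr, tpr

fpr, tpr = roc_points([1, 0, 1, 1, 0], np.array([0.9, 0.8, 0.6, 0.4, 0.2]))
print(fpr, tpr)
```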
Which binary classifier you use, whether LDA or anything else, is irrelevant to ROC analysis and outside the scope of pROC. ROC analysis is a generic way to assess predictions, scores, or more generally the signal coming out of a binary classifier. It doesn't assess the binary classifier itself, or the signal detector, only the signal. This makes it very easy to compare different classification methods, and has been instrumental to the success of ROC analysis in general.

When and why would you want to use a Probability Density Function?

A wannabe data scientist here, trying to understand: as a data scientist, when and why would you use a probability density function (PDF)?
A scenario and a few pointers for learning about this and other such functions, like the CDF and PMF, would be really helpful. Do you know of any book that covers these functions from a practical standpoint?
Why?
Probability theory is very important for modern data-science and machine-learning applications because, in a lot of cases, it allows one to "open up a black box" and shed some light on a model's inner workings, and, with luck, find the ingredients needed to transform a poor model into a great one. Without it, a data scientist's work is very much restricted in what they are able to do.
A PDF is a fundamental building block of probability theory, absolutely necessary for any sort of probabilistic reasoning, along with expectation, variance, prior and posterior distributions, and so on.
Here are some examples from StackOverflow, drawn from my own experience, where a practical issue boils down to understanding the data distribution:
Which loss-function is better than MSE in temperature prediction?
Binary Image Classification with CNN - best practices for choosing “negative” dataset?
How do neural networks account for outliers?
When?
The questions above provide some examples; here are a few more if you're interested (the list is by no means complete):
What is the 'fundamental' idea of machine learning for estimating parameters?
Role of Bias in Neural Networks
How to find probability distribution and parameters for real data? (Python 3)
I personally try to find a probabilistic interpretation whenever possible (choice of loss function, parameters, regularization, architecture, etc.), because this way I can move from blind guessing to making reasoned decisions.
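As a concrete starting point, here is a small scipy-based sketch contrasting the three objects mentioned above; the distributions and parameters are arbitrary examples:

```python
# PDF for continuous variables, PMF for discrete ones, CDF for both.
from scipy import stats

norm = stats.norm(loc=0, scale=1)        # continuous: standard normal
print("pdf at 0:", norm.pdf(0))          # a density, not a probability
print("P(X <= 1.96):", norm.cdf(1.96))   # the CDF gives actual probabilities

binom = stats.binom(n=10, p=0.5)         # discrete: 10 fair coin flips
print("P(X = 5):", binom.pmf(5))         # the PMF is a probability directly
print("P(X <= 5):", binom.cdf(5))
```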
Reading
This is very opinion-based, but at least a few books are really worth mentioning: The Elements of Statistical Learning; An Introduction to Statistical Learning: with Applications in R; or Pattern Recognition and Machine Learning (if your primary interest is machine learning). That's just a start; there are dozens of books on more specific topics, like computer vision, natural language processing, and reinforcement learning.

I'm trying to implement 'multi-threading' to do both training and prediction (testing) at the same time

I'm trying to implement 'multi-threading' to do both training and prediction (testing) at the same time, using the Python 'threading' module as shown in https://www.tensorflow.org/api_docs/python/tf/FIFOQueue
My questions are the following:
If I use the Python 'threading' module, does TensorFlow use a greater portion of the GPU or of the CPU?
Do I have to make two graphs (neural nets with the same topology) in TensorFlow, one for prediction and the other for training? Or is it okay to make just one graph?
I'll be very grateful to anyone who can answer these questions! Thanks!
If you use the Python threading module, it will only make use of the CPU; moreover, Python threading does not provide run-time parallelism (because of the global interpreter lock), so you should use multiprocessing instead.
If your model uses ops like dropout or batch norm, which behave differently during training and validation, it's a good idea to create separate graphs that reuse the common variables (the validation graph reuses all of the training variables) for validation/testing.
Note: you can also use a single graph, with additional operations that change behavior depending on whether you are training or validating.
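To illustrate both options, here is a minimal TensorFlow 1.x sketch (layer sizes and names are hypothetical) of the separate-graphs approach with variable reuse:

```python
# TF 1.x: build the same net twice, sharing weights via variable_scope reuse.
import tensorflow as tf

def build_net(x, is_training):
    # Dropout behaves differently at training vs. test time
    h = tf.layers.dense(x, 64, activation=tf.nn.relu, name="hidden")
    h = tf.layers.dropout(h, rate=0.5, training=is_training)
    return tf.layers.dense(h, 1, name="out")

x = tf.placeholder(tf.float32, [None, 10])

# Option 1: two subgraphs sharing the same variables via reuse
with tf.variable_scope("model"):
    train_out = build_net(x, is_training=True)
with tf.variable_scope("model", reuse=True):
    test_out = build_net(x, is_training=False)  # same weights, no dropout

# Option 2 (single graph): feed a boolean placeholder instead, e.g.
# is_training = tf.placeholder(tf.bool), and pass it to tf.layers.dropout.
```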
