reinforcement learning in NodeJS? - node.js

I am looking for a solution to train an AI in Node.js using reinforcement learning.
So far I have only been able to find solutions in Python...
I want the AI to give a buy/sell trigger based on price data and some technical indicators.
Edit: I know Python is way better for ML, but I wanted to keep everything in TypeScript. Anyway, I ended up using Python :D

Node.js isn't really geared up for AI/ML on its own.
You're better off implementing a solution that's either pure Python (you can do a lot with Python, including most of what Node.js can do), or one that exchanges data between the Node.js part and the Python ML part.
Without any specifics on your implementation it's hard to give an exact answer, but assuming the Node.js side both receives the price data and sends the triggers, you could pipe the price data into the Python ML model through something like gRPC, return the prediction to the Node.js app, and keep the trigger-sending logic there.
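As a rough illustration of that second approach, here is a minimal sketch of the Python side. It uses plain HTTP with Flask instead of gRPC so the example stays self-contained; the /trigger route, the payload fields and the placeholder model are assumptions, not part of the original answer.

```python
# Minimal sketch of the Python side: it accepts a window of price data
# from the Node.js app and answers with a buy/sell/hold action.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_trigger(prices, indicators):
    """Placeholder for the trained policy: returns 'buy', 'sell' or 'hold'."""
    if not prices:
        return "hold"
    # Trivial stand-in logic so the sketch runs end to end: compare the
    # latest price to a moving-average indicator supplied by the caller.
    return "buy" if prices[-1] < indicators.get("sma", prices[-1]) else "sell"

@app.route("/trigger", methods=["POST"])
def trigger():
    payload = request.get_json(force=True)
    action = predict_trigger(payload.get("prices", []), payload.get("indicators", {}))
    return jsonify({"action": action})

if __name__ == "__main__":
    app.run(port=5000)
```

The Node.js side would POST its latest prices and indicators to /trigger as JSON and act on the returned action; a gRPC setup works the same way conceptually, just with a typed .proto contract and generated stubs instead of ad-hoc JSON.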

Related

Source Finding Using TDOA in Python

I have recently gone through this paper. I want to implement it in Python, but I am not sure how to go about it. I am very new to this topic and cannot work out which equations they have used. I have read this, but I am not able to relate it to the paper.
How should I approach it in Python? I want to build a function that can tell me the possible source location. I am also not able to understand the dataset.
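Whatever the paper's exact formulation, a common first step in TDOA work is estimating the delay between two sensor signals from the peak of their cross-correlation. The sketch below shows only that step with NumPy/SciPy (correlation_lags needs SciPy 1.6 or newer); the signals, sampling rate and delay are synthetic stand-ins, not taken from the paper or its dataset.

```python
# Estimate the time difference of arrival (TDOA) between two sensors
# from the peak of their cross-correlation.
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_tdoa(sig_a, sig_b, fs):
    """Delay of sig_b relative to sig_a, in seconds (positive = sig_b arrives later)."""
    corr = correlate(sig_b, sig_a, mode="full")
    lags = correlation_lags(len(sig_b), len(sig_a), mode="full")
    return lags[np.argmax(corr)] / fs

# Synthetic check: the same decaying tone reaching the second sensor 5 ms later.
fs = 48_000
t = np.arange(0, 0.1, 1 / fs)
pulse = np.sin(2 * np.pi * 1_000 * t) * np.exp(-50 * t)
delay = int(0.005 * fs)
sig_a = np.concatenate([pulse, np.zeros(delay)])
sig_b = np.concatenate([np.zeros(delay), pulse])
print(estimate_tdoa(sig_a, sig_b, fs))  # ~0.005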

Best approach to first use of Python with Google Sheets, to query API in GitHub and Jira?

This question is about process / approach, more so than how to write the code itself. I'm a process learner, so this is the part that's creating personal anxiety for me.
I am very much a beginner, and still learning about importing libraries and the like. However, I have an idea for what I'd like to be able to do for a Capstone Project as I learn.
I have a spreadsheet that I use each Sprint as part of our Capacity Planning process. I want to use Python to query target tickets in our client's GitHub account (while logged in) and our Jira account, to pull specific data into the cells that I currently populate manually. Others have expressed interest in seeing what I come up with, as they use the same Google Sheets template.
From Sheets for Developers > API v4, through trial and error, I should be able to figure out how to import data into Google Sheets in general. Likewise, this GoTrained Python Tutorial looks like it has an approach for obtaining information from the GitHub API. I'm fairly certain I can find something similar for Jira (though the first site I tried used a fake "captcha" script to trick visitors into accepting notifications, which was a red flag to me).
But which are the quality, most efficient approaches, especially for a Python beginner like myself? The last time I coded was 15-20 years ago, using LPC to build rooms/mobs/objects on a MU*, accessed via the Telnet protocol.
I need to learn more about how to set up the program, which libraries might be useful, and the best way, after decomposition, to identify the components and methods to use in solving for the project goal:
Import selected field data from Jira and GitHub into a Sheet, using Python (a rough sketch of this flow appears after this list).
How do I know which libraries are best to import, like Tkinter, for the functions I will need? (Tkinter came up in a search on creating dropdown lists in Python, so that the repo names can be standardized.)
I am seeing lots of references to REST APIs, but we haven't covered those in the course yet.
What are some quality resources for learning the principles I should understand better before attempting this project?
w3schools.com is on my radar, but it is also extensive; I'm not sure whether there are resources honed in on this type of "challenge".
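As a rough sketch of that goal under some assumptions (a GitHub personal access token in the GITHUB_TOKEN environment variable, a sheet already shared with a gspread service account, and made-up repo and sheet names), the core flow could look like this; it is one reasonable pairing of libraries, not the only one:

```python
# Rough sketch: pull a few fields from the GitHub REST API and append them
# to a Google Sheet. GITHUB_TOKEN, the repo name, the sheet name and the
# service-account file are all placeholders.
import os

import gspread
import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]   # personal access token
REPO = "your-org/your-repo"                 # placeholder repository
SHEET_NAME = "Capacity Planning"            # placeholder sheet title

# 1. Query GitHub for open issues in the repo (Jira's REST API is queried
#    the same way with requests, against its /rest/api/2/search endpoint).
resp = requests.get(
    f"https://api.github.com/repos/{REPO}/issues",
    headers={"Authorization": f"token {GITHUB_TOKEN}"},
    params={"state": "open"},
    timeout=30,
)
resp.raise_for_status()
rows = [[issue["number"], issue["title"], issue["state"]] for issue in resp.json()]

# 2. Append the selected fields to the Google Sheet via gspread
#    (the sheet must be shared with the service account's email address).
gc = gspread.service_account(filename="service_account.json")
worksheet = gc.open(SHEET_NAME).sheet1
worksheet.append_rows(rows)
```

The official google-api-python-client works instead of gspread if you prefer staying closer to the Sheets API v4 documentation, and Jira has a dedicated wrapper (the jira package) on top of its REST API.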

What is the easiest way to operationalize Python code?

I am new to writing Python code. I have currently written a few modules for data analysis projects. The data is queried from AWS Redshift tables and summarized in CSVs and Excel spreadsheets.
At this point I do not want to pass it on to other users in the org, as I do not want to expose the code.
Is there an easy way to operationalize the code without exposing it?
PS: I am in the process of learning front-end development (Flask, HTML, CSS) so users can input data and get results back.
Python programs are almost always shipped as bare source. There are ways of compiling Python code into binaries, but this is not a common thing to do and usually I would not recommend it, as it's not as easy as one might expect (which is too bad, really).
That said, you can check out cx_Freeze and Cython.
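If you do go the cx_Freeze route, the configuration is small. This is a minimal sketch in which analysis.py stands in for your actual entry-point script; the project name and version are placeholders.

```python
# setup.py — minimal cx_Freeze configuration that bundles analysis.py
# (placeholder name for your entry-point script) into an executable.
from cx_Freeze import setup, Executable

setup(
    name="data_analysis",
    version="0.1",
    description="Redshift-to-spreadsheet summaries",
    executables=[Executable("analysis.py")],
)
```

Running python setup.py build then produces a build/ directory containing the executable and its dependencies. Keep in mind this ships bytecode rather than truly hiding the logic, so it raises the bar for casual readers without making the code impossible to recover.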

can python understand a human language?

I started learning Python some months ago.
However, I came across a job online where the client wanted me to write a web application to analyse stories and extract features like the title, characters, proverbs, morals and songs.
I achieved this through labelling and indexing.
But he further stated that he wanted the code to be able to understand the story and generate the morals and some other features by itself, without a locally labelled moral for the code to fetch.
The stories are in a Nigerian language and I don't know if this is possible.
Please, is this possible?
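For context on what "understanding" would involve technically: the usual step up from hard-coded labels is a supervised text classifier trained on example stories. A minimal sketch with scikit-learn follows; the story texts and moral labels are invented placeholders, and nothing here addresses how much labelled data a specific Nigerian language would need.

```python
# Minimal sketch: learn to predict a story's moral category from its text,
# instead of looking the moral up from hand-written labels in the code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: real use would need many labelled stories
# in the target language.
stories = [
    "The tortoise tricked the birds and lost the smoothness of his shell.",
    "The hunter shared his meat and the village fed him during the famine.",
]
morals = ["greed is punished", "kindness is repaid"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(stories, morals)

print(model.predict(["The greedy chief took all the yams for himself."]))
```

This still needs labelled examples, but the labels live in training data instead of in the application code, which is what lets the model assign a moral to a story it has never seen.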

scikit-learn task management library

Update:
After some extra searching, I think I am overusing scikit-learn. If I want a production ML tool, I should use something like Mahout, which is built on Hadoop. scikit-learn is more of a toy tool for experimenting with ideas.
I am new to scikit-learn. I am trying to use it to train a model, and I want to experiment with different feature combinations and data pre-processing techniques. Each experiment takes a few hours (to minimize error, I run every experiment 10 times with different train-test splits), so I wrote some Python scripts to run the experiments one by one automatically; when an experiment is done, it sends me an email.
That works well. Today I found another server that is available to run my experiments, so it seems reasonable to write a script that can run experiments in a distributed fashion. There are big-data platforms like Hadoop, but I find that they are not for Python and scikit-learn (please correct me if my understanding of Hadoop is wrong).
Because scikit-learn is an "old" library, I think there should already be libraries with the capabilities I want, or am I heading in the wrong direction with scikit-learn?
I tried googling "scikit-learn task management", but nothing I want turns up. Other keywords to search for are also very welcome.
See "Experimentation frameworks" at http://scikit-learn.org/dev/related_projects.html
