What is the easiest way to operationalize Python code? - python-3.x

I am new to writing Python code. I have written a few modules for data analysis projects. The data is queried from AWS Redshift tables and summarized in CSVs and Excel spreadsheets.
At this point I do not want to pass it on to other users in the org, as I do not want to expose the code.
Is there an easy way to operationalize the code without exposing it?
PS: I am in the process of learning front-end development (Flask, HTML, CSS) so users can input data and get results back.

Python programs are almost always shipped as bare source. There are ways of compiling Python code into binaries, but this is not a common thing to do and usually I would not recommend it, as it's not as easy as one might expect (which is too bad, really).
That said, you can check out cx_Freeze and Cython.
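For a concrete example, here is a minimal cx_Freeze setup script; the script name report.py and the package metadata are hypothetical stand-ins for your own modules:

    # setup.py -- minimal cx_Freeze sketch (hypothetical script name "report.py")
    # Build a standalone executable with:  python setup.py build
    from cx_Freeze import setup, Executable

    setup(
        name="report_tool",
        version="0.1",
        description="Data analysis reports",
        executables=[Executable("report.py")],
    )

Bear in mind that freezing bundles compiled bytecode rather than truly hiding the source: .pyc files can be decompiled, so treat this as obfuscation, not protection.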

Related

Best approach to first use of Python with Google Sheets, to query API in GitHub and Jira?

This question is about process / approach, more so than how to write the code itself. I'm a process learner, so this is the part that's creating personal anxiety for me.
I am very much a beginner, and still learning about importing libraries and the like. However, as I learn, I have an idea for what I'd like to be able to do for a Capstone Project.
I have a spreadsheet that I use each Sprint as part of our Capacity Planning process. I want to use Python to query target tickets in our client's GitHub account (while logged in) and our Jira account, to pull specific data into the cells that I currently populate manually. Others have expressed interest in seeing what I come up with, as they use the same Google Sheets template similarly.
From Sheets for Developers > API v4, through trial and error, I should be able to figure out how to import data into Google Sheets generally. Likewise, this GoTrained Python Tutorial looks like it has an approach for obtaining information from the GitHub API. I'm fairly certain that I can find something similar for Jira (though the first site I tried wanted to use a fake "captcha" script to trick me into accepting notifications from the site, which was a red flag to me).
But which approaches are solid and efficient, especially for a Python beginner like myself? The last time I coded was 15-20 years ago, using LPC to build rooms/mobs/objects on a MU*, accessed via the Telnet protocol.
I need to learn more about how to set up the program, which libraries might be useful, and the best way, after decomposition, to identify the components and methods to use in solving for the project goal:
import select field data from Jira and GitHub to a Sheet, using Python (a rough sketch follows this list)
how do I know which libraries are best to import, like Tkinter, for the functions I will need (this one came up in a search on creating dropdown lists in Python, so that the repo names can be standardized)
I'm seeing lots of references to REST APIs, but we haven't talked about that in the course yet
what are some quality resources to learn more about principles that I should understand better before attempting this project?
w3schools.com is on my radar, but it is also extensive -- not sure if there are resources honed in on this type of "challenge"
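For what it's worth, a rough sketch of the GitHub-to-Sheets leg (the repo, sheet name, token, and credentials file are hypothetical; assumes the requests and gspread libraries):

    # Pull open issues from a GitHub repo and append them to a Google Sheet.
    # Hypothetical repo, sheet, and credentials -- adjust to your own setup.
    import requests
    import gspread

    resp = requests.get(
        "https://api.github.com/repos/your-org/your-repo/issues",
        headers={"Authorization": "token YOUR_GITHUB_TOKEN"},
        params={"state": "open"},
    )
    resp.raise_for_status()

    gc = gspread.service_account(filename="service-account.json")
    ws = gc.open("Capacity Planning").sheet1

    for issue in resp.json():
        ws.append_row([issue["number"], issue["title"], issue["state"]])

Jira's REST API can be queried the same way with requests; only the URL and the authentication scheme differ.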

Is there a way to use the deap library in Grasshopper

Is there a way I can use the deap library inside Grasshopper's Python node?
I want to run a genetic algorithm where the fitness function is calculated by Grasshopper (only the fitness function; everything else is to be taken care of by deap inside the Python node).
Can it be done?
I am having problems with:
importing the deap library in Grasshopper's Python interface (I think I will be able to solve this by copying the files manually from the Python path)
(major problem) Grasshopper doesn't allow closed loops, so I can't seem to find a way to feed the fitness back into the Python node with the main code
I couldn't get it to work and had to make do with the Grasshopper plugins; the problem was that you can only install IronPython libraries for Grasshopper.
These are two well-known issues with 'out-of-the-box' Grasshopper, but there are several plugins that can help overcome them.
Question One
The basic GHPython component uses IronPython, which limits which libraries are compatible and able to be used. To get around this constraint there is a plugin called 'GH_CPython'. It allows you to set a locally installed Python interpreter for your code, and then have access to any libraries available to that local interpreter. So if you install the deap library locally, it will be available within the Grasshopper GH_CPython editor. Here is a link to download and install GH_CPython: https://www.food4rhino.com/en/app/ghcpython
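For illustration, a minimal deap sketch that could run under GH_CPython once deap is installed locally; the evaluate function below is a stand-in for the fitness value your Grasshopper definition would actually compute:

    import random
    from deap import algorithms, base, creator, tools

    # Maximize a single fitness value
    creator.create("FitnessMax", base.Fitness, weights=(1.0,))
    creator.create("Individual", list, fitness=creator.FitnessMax)

    toolbox = base.Toolbox()
    toolbox.register("attr_float", random.random)
    toolbox.register("individual", tools.initRepeat, creator.Individual,
                     toolbox.attr_float, n=10)
    toolbox.register("population", tools.initRepeat, list, toolbox.individual)
    toolbox.register("mate", tools.cxTwoPoint)
    toolbox.register("mutate", tools.mutGaussian, mu=0, sigma=0.1, indpb=0.1)
    toolbox.register("select", tools.selTournament, tournsize=3)

    def evaluate(individual):
        # Stand-in fitness: in practice this value would come back from
        # the Grasshopper definition (see Question Two on closing the loop).
        return (sum(individual),)

    toolbox.register("evaluate", evaluate)

    pop = toolbox.population(n=50)
    algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=20, verbose=False)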
Question Two
As you noted, Grasshopper is procedural and has limited support for recursive routines. To get around this, there are several plugins that support recursion and may help with your implementation. Which plugin is best for your situation is difficult to say without a deeper description of your goals. Each of the options below provides recursive functionality that allows for 'closed loops', where the results of a script can be fed back as input.
Hoopsnake - very basic and has been around the longest
Anemone - A little more flexible and uses multiple components for loop start and end for cleaner-looking scripts. It also has a 'record history' functionality.
Octopus - Has a 'Loop' component that is similar to Hoopsnake. It also has a 'record history' functionality.

SuiteScript - one big script file, or multiple smaller files

From a performance/maintenance point of view, is it better to write my custom modules for NetSuite as one big JS file, or as multiple segmented script files?
If you compare it with a server-side JavaScript platform, say Node.js, the most popular one, every module is written in a separate file.
I generally take an object-oriented JavaScript approach and put each class in a separate file, which helps to organise the code.
One approach you can take is to keep separate files during development and merge them with a JS minifier tool like the Google Closure Compiler when you deploy your code for production. That can give you the best of both worlds, if you are really concerned about every last millisecond of performance.
If you look at the SuiteScript 2.0 architecture, it encourages a modular design: you load only the modules you need, and it is easier to maintain multiple code files (one per module) with future enhancements, bug fixes, and code reuse in mind.
Performance can never be judged by the line count of your module. We generally maintain modules for the readability and simplicity of the code. It is good practice to put all generic functionality into a utility script and use it as a library across all the modules. Again, it depends on your code logic and programming style, so if you want to split your JS file into multiple segments for more readability, I don't think it's a bad idea.

Selenium and Node.js: simple idea. Is it possible?

For work I need to extract data from websites and write this data to a CSV file. At this stage I'm using Selenium and Perl (a very powerful pair), but yesterday I thought of this solution:
Selenium IDE ---via JS--->Web app on Node.js Webserver------> CSV
Do you think it is possible? Or is there another, more "elegant" solution?
The idea is general: I can use it for data storage, and the testers can use it to improve their tests using the stored variables, so it's for general purposes.
For scraping purposes you can use the jsdom module, as shown here:
http://blog.nodejitsu.com/jsdom-jquery-in-5-lines-on-nodejs
For generating CSV, this module is nice:
https://github.com/koles/ya-csv
But there are easier ways to do it, like using Mechanize in Perl, Ruby, or Python.
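For example, here is a minimal scrape-to-CSV sketch in Python, using requests and BeautifulSoup rather than Mechanize (the URL and CSS selectors are hypothetical):

    # Scrape a page and write selected fields to a CSV file.
    import csv
    import requests
    from bs4 import BeautifulSoup

    resp = requests.get("https://example.com/products")  # hypothetical page
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    rows = []
    for item in soup.select(".product"):  # hypothetical CSS class
        name = item.select_one(".name").get_text(strip=True)
        price = item.select_one(".price").get_text(strip=True)
        rows.append([name, price])

    with open("products.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "price"])
        writer.writerows(rows)

This works for static pages; if the data is rendered by JavaScript, you would still need a browser driver like Selenium.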

Data manipulating environment

I am looking for something* to aid me in manipulating and interpreting data.
Data such as names, addresses, and the like.
Currently, I am making heavy use of Python to find whether one piece of information relates to another, but I am noticing that a lot of my code could easily be replaced with some sort of query language.
Mainly, I need an environment where I can import data in any format, be it XML, HTML, CSV, Excel, or database files. I want the software to read it and tell me what columns there are and so on, so that I only have to worry about writing code that interprets it.
Does this sound concrete enough? If so, does anyone know of such elegant software?
*Can be a programming language, IDE, combination of those.
Have you looked at the pandas module in Python? http://pandas.pydata.org/pandas-docs/stable/
When combined with the IPython Notebook, it makes a great data manipulation platform.
I think it may let you do a lot of what you want to do. I am not sure how well it handles HTML, but it's built to handle CSV, Excel, and database files.
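A minimal sketch of what that inspection looks like in pandas (the file and column names are hypothetical):

    import pandas as pd

    df = pd.read_csv("contacts.csv")    # also: pd.read_excel, pd.read_sql, pd.read_html
    print(df.columns.tolist())          # column names
    print(df.dtypes)                    # inferred type of each column
    print(df.head())                    # first few rows

    # Relating one piece of information to another is a join:
    # other = pd.read_excel("addresses.xlsx")
    # merged = df.merge(other, on="name", how="left")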
