Best approach for a first use of Python with Google Sheets, to query the GitHub and Jira APIs? - python-3.x

This question is about process / approach, more so than how to write the code itself. I'm a process learner, so this is the part that's creating personal anxiety for me.
I am very much a beginner, still learning about importing libraries and the like. However, I have an idea for what I'd like to be able to build as a Capstone Project while I learn.
I have a spreadsheet that I use each Sprint as part of our Capacity Planning process. I want to use Python to query target tickets in our client's GitHub account (while logged in) and in our Jira account, to pull specific data into the cells that I currently populate manually. Others have expressed interest in seeing what I come up with, as they use the same Google Sheets template similarly.
From Sheets for Developers > API v4, through trial and error, I should be able to figure out how to import data into Google Sheets generally. Likewise, this GoTrained Python Tutorial looks like it has an approach for obtaining information from the GitHub API. I'm fairly certain that I can find something similar for Jira (though the first site that I tried wanted to use a fake "captcha" script to trick me into accepting notifications from the site, which was a red flag to me).
But which are the quality, most efficient approaches - especially for a Python beginner like myself, just starting out? The last time I coded was 15-20 years ago, using LPC to build rooms/mobs/objects on a MU*, accessed via the Telnet protocol.
I need to learn more about how to set up the program, which libraries might be useful, and - after decomposition - how to identify the components and methods to use in solving for the project goal:
- import select field data from Jira and GitHub into a Sheet, using Python
- how do I know which libraries are best to import? Tkinter, for example, came up in a search on creating dropdown lists in Python, so that the repo names can be standardized
- I'm seeing lots of references to REST APIs, but we haven't covered those in the course yet
- what are some quality resources for learning the principles I should understand before attempting this project? w3schools.com is on my radar, but it is also extensive - I'm not sure whether there are resources honed in on this type of challenge
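As a rough sketch of the middle step of that pipeline - with the HTTP call replaced by a hard-coded sample shaped like the GitHub issues endpoint's response, since your token and repo names will differ - the field extraction and a Sheets-ready payload could look like this:

```python
import json

# Sample payload shaped like the GitHub REST API's "list repository
# issues" response. The real data would come from a GET request to
# https://api.github.com/repos/{owner}/{repo}/issues with an auth token.
sample_response = json.dumps([
    {"number": 101, "title": "Fix login bug", "state": "open",
     "assignee": {"login": "alice"}},
    {"number": 102, "title": "Update docs", "state": "closed",
     "assignee": None},
])

issues = json.loads(sample_response)

# Flatten just the fields the spreadsheet needs, one row per ticket.
rows = [
    [i["number"], i["title"], i["state"],
     i["assignee"]["login"] if i["assignee"] else ""]
    for i in issues
]

# The Sheets API's spreadsheets.values.update method accepts a body of
# this shape; a client library would send it on your behalf.
body = {"values": [["Number", "Title", "State", "Assignee"]] + rows}
print(body["values"][1])  # [101, 'Fix login bug', 'open', 'alice']
```

A client such as gspread or google-api-python-client would handle the authentication and the actual `values.update` call; the Jira side follows the same pattern with different field names.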

Related

Automatic Anki question and card creation

I'm really interested in some sort of program that allows me to highlight quotations from books and automatically formulates flashcards, questions, etc.
The time-consuming process of creating Anki flashcards makes it basically not worth it if you're trying to remember massive amounts of information and there aren't pre-made flashcards. Anki's great for university, but I'm aiming to remember large amounts of academic information outside of my basic university course.
Web scraping scripts are great for getting basic, well-presented information (for example, I created 7,000 Anki flashcards of French verb conjugations using a script, which worked magnificently), but I'm basically looking for a fast way to put information in, have it pick out statistics and basic phrases from the text, and formulate questions. This is a pretty complex task, I assume -- but I wonder whether some higher-level information-learning platform like Wolfram-Alpha might work for programming such a thing?
I don't really know -- I'm not a coder. Just someone looking to learn massive amounts of information and automate the process.
Any solutions, recommendations, etc?
Thanks
A few words about creating content in Anki
You have to consider that creating notes with Anki is a skill, and as all skills, the more you do it, the better you become at it.
For this reason, a workflow that works well when you are trying to learn a lot of things that you can't simply scrape from somewhere is to add them by hand, and when you realize you're doing a repetitive task, create a script to automate it.
This is easier and more efficient than a general-purpose solution that automatically generates cards, because such a solution does not exist: it's easy to write a script that targets a single repetitive process (even if it's only for a few hundred notes), and to make one each time you feel you are doing tedious, repetitive work, but it's very hard to make a full-featured script that works for everything.
Even if you don't know how to code, you can still go to the Anki forum and ask for help: people will happily provide it. However, it would be easier if you learnt how to code. For instance, you could learn the basics of Python, a very easy-to-learn programming language that is handy for automation scripts and is also what Anki add-ons are written in.
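As a tiny example of the kind of single-purpose script described above - the facts and the question phrasing here are invented - one common shape is to generate a tab-separated file that Anki's File > Import can load:

```python
import csv

# Hypothetical repetitive task: the same question pattern applied to a
# list of facts. One small script per pattern, as suggested above.
facts = [
    ("mitochondrion", "the powerhouse of the cell"),
    ("ribosome", "the site of protein synthesis"),
]

with open("cards.txt", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    for term, definition in facts:
        # Column 1 becomes the front of the note, column 2 the back.
        writer.writerow([f"What is a {term}?",
                         f"A {term} is {definition}."])

# Anki's File > Import reads tab-separated files like this one,
# mapping each column to a note field.
```

Each new repetitive pattern gets its own few-line script; the import step inside Anki stays the same.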
Wolfram
Regarding Wolfram-Alpha: I am not an expert about it, but it's just a computer algebra program. Yes, it has been "pimped" with some (quite limited) natural language recognition, and its database also includes non-math content, but it's still just a symbolic computation program. It's not what you are looking for.
Incremental reading
However, I have the impression that what you are looking for is a way to process a lot of text, extract information from it, and create notes that make you learn it. This process is called incremental reading, and here is an article that explains what it is in detail. There is an Anki add-on that will help you with that task. It's clearly not fully automated, far from it, but you can really process several thousand articles with it.

Which language to use

I have zero experience with coding and I am wanting to begin learning. I want to create a project as I learn and need help in deciding which language is appropriate for what I am thinking about creating.
The application I have in mind would search a pre-existing website that lists first and last names across multiple pages, then run each of those names through another website's search functions to see if any matches are found. If any matches are found, I would like to be notified. This is just something I thought of when considering what my first project should be, to make my daily job easier and less tedious. The process would not be constantly running; rather, I would run it whenever I choose, to see if any results are found with the current names. Any insight would be great on which programming language would be best for something like this.
This is of course an opinionated answer, but I'd recommend Python, because it:
- is very easy to learn
- is easy to set up
- has awesome web-scraping libraries (e.g. Requests and BeautifulSoup)
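As a sketch of both halves of the idea in Python - with the two websites stubbed out as sample HTML and a sample search result set, since the real pages and their markup are unknown - the standard library alone is enough to get started:

```python
from html.parser import HTMLParser

# Stand-in for the first website's page; in practice you'd fetch it
# (e.g. with the requests library) and the markup would differ.
page = """
<ul class="names">
  <li>Ada Lovelace</li>
  <li>Grace Hopper</li>
  <li>Alan Turing</li>
</ul>
"""

class NameCollector(HTMLParser):
    """Collects the text of every <li> element."""
    def __init__(self):
        super().__init__()
        self.in_item = False
        self.names = []
    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self.in_item = True
    def handle_endtag(self, tag):
        if tag == "li":
            self.in_item = False
    def handle_data(self, data):
        if self.in_item and data.strip():
            self.names.append(data.strip())

parser = NameCollector()
parser.feed(page)

# Stand-in for the second site's search: here just a known set. In
# practice each name would go into that site's search form or URL.
known = {"Grace Hopper"}
matches = [n for n in parser.names if n in known]
print(matches)  # ['Grace Hopper']
```

The notification step could then be as simple as printing the matches, or sending yourself an email with `smtplib`.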

Is it possible to pull information from the web using data from Excel?

I have an Excel sheet I've recently received from someone with information about their personal library, and they've asked me to add the Library of Congress number to every book on the sheet. There are thousands of books on this thing, and it would take forever to search the Library of Congress website and copy-paste everything over. Is there a built-in function, or a way to have a column search the website for each book and copy in the appropriate number?
Err... No.
This specific functionality (searching the Library of Congress) is so specific that it makes no sense for Microsoft to add it to Excel. There would be probably 3 people in total that would EVER use it.
A more generic functionality (take an arbitrary webpage and look for some arbitrary information in it) would on the other hand be too vague. Either it would be useless or it would need to have a bazillion parameters to get it to do what you need to. And you'd need to spend months trying to configure it just right.
Actually, Excel does kinda offer this generic functionality - it's called VBA. You can write custom programs there that can do pretty much anything under the sun. The downside: it's programming. If you know what you're doing, you can probably get it done in a couple of days, maybe a week. If you don't know... good luck. You'll need it!
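Outside of Excel itself, a short script is another route. As an illustrative Python sketch - the book titles are invented, and the exact loc.gov query parameters are an assumption to verify against the Library's API documentation - the per-row lookup URLs could be built like this:

```python
import csv
import io
from urllib.parse import urlencode

# Stand-in for the spreadsheet, saved as CSV; a library such as
# openpyxl could read the .xlsx directly instead.
sheet = io.StringIO(
    "Title,Author\n"
    "Moby-Dick,Herman Melville\n"
    "Walden,Henry David Thoreau\n"
)

urls = []
for row in csv.DictReader(sheet):
    # loc.gov exposes a JSON search endpoint; the parameters used here
    # are an assumption to check against its documentation.
    query = urlencode({"q": f'{row["Title"]} {row["Author"]}',
                       "fo": "json"})
    urls.append(f"https://www.loc.gov/search/?{query}")

# Each URL would then be fetched (e.g. with urllib.request), the call
# number parsed from the JSON, and the value written back to the sheet.
print(urls[0])
```

With a few thousand rows, remember to throttle the requests so you don't hammer the site.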

How to write feature file and when to convert them to step definition to adapt to a changing business requirement?

I am working on a BDD web development and testing project with other team members.
At the top, we write feature files in Gherkin and run Cucumber to generate step functions. At the bottom, we write Selenium page models and action library scripts. The rest is just filling in the step functions with Selenium script and finally running the Cucumber cases.
Sounds simple enough.
The problem comes starting when we write feature files.
Problem 1: Our client's requirements keep changing every week as the project proceeds, in terms of removing old ones and adding new ones.
Problem 2: On top of that, for some features, detailed steps keep changing too.
The problem gets really bad if we try to regenerate updated step functions from the updated feature files every day. There is quite a bit of housekeeping to do to keep step functions and feature files in sync.
To deal with problem 2, I remembered that one basic rule of writing Gherkin feature files is to use business-domain language as much as possible. So I tried to persuade the BA to write the feature files a little more vaguely and not include too many UI-specific steps, so that we wouldn't need to modify feature files/step functions as often. But she hesitates, because the client's requirement document includes those details and she just tries to follow it.
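To make that contrast concrete, here is an invented pair of scenarios for the same behaviour - the first written against the UI (and brittle under redesign), the second in business language, so that only the step definitions underneath need updating when the screens change:

```gherkin
# UI-coupled: breaks whenever the screen layout changes
Scenario: Approve an order (brittle)
  Given I click the "Orders" tab
  And I select row 3 and press the green "Approve" button
  Then the status cell shows "Approved"

# Business language: survives UI redesigns
Scenario: Approve an order
  Given a pending order from an authorised customer
  When the order is approved
  Then the customer is notified of the approval
```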
To deal with problem 1, I have no solution.
So my question is:
Is there a good way to write feature files so that they are less impacted by the client's requirement changes? Can we write them vaguely, omitting some details that may change (this way at least we can stabilize the step-function prototypes), and if so, how far can we go?
When is a good time to generate the step definitions and fill in their content - from the beginning, or after the features stabilize a little? How often should we do it if the features keep changing? And is there a convenient way to clean out outdated step functions?
Any thoughts are appreciated.
Thanks,
If your client has specific UI requirements for which you are contracted to provide automated tests, then you ought to be writing those using actual test automation tools. Cucumber is not a test automation tool. If you attempt to use it as such, you are simply causing yourself a lot of pain for naught.
If, however, you are only contracted to validate that your application complies with the business rules provided by your client, during frequent and focused discovery sessions with them, then Cucumber may be able to help you.
In either case, you are going to ultimately fail if there's no real collaboration with your client. If they're regularly throwing new business rules or new business requirements over a transom through which you have limited or no visibility, then you are in a no-win situation.

auto fill web form with dynamic data

I am trying to create shipping labels for a lot of different customers by filling out forms on the UPS website. Is there a programmatic way of doing this?
It is different from the usual auto-fill of a web form, because the name, address, etc. fields aren't filled with "constants": 100 customers need 100 different forms.
Before I dig into python-mechanize, or autoit IE.au3, is there an easier way doing this?
UPDATE 2019-09-09: Generally, would no longer recommend FF.au3 unless you're very much into AutoIt or have tons of legacy code. I'd rather suggest using Selenium with a "proper" programming language.
You could check out FF.au3 for AutoIt. Together with FireFox and MozRepl it allows for web automation, including dynamic websites/forms.
The feature set should be sufficient for your task (e.g. XPath for content extraction and for filling out forms - just have a look at the link and you'll get an idea of what it can do). It's also fairly easy to use.
The downside is that it's not the most performant approach and I've encountered a bug, but that doesn't say much. Overall it did work well for me for small or medium-ish projects.
Setup:
Install AutoIt: https://www.autoitscript.com/site/autoit-tools/
Get the FF.au3 lib: https://www.thorsten-willert.de/index.php/software/autoit/ff/ff-au3
Get an old Firefox version <v57 or ESR (see remarks on ff.au3 page above)
Install MozRepl: http://legacycollector.org/firefox-addons/264678/index.html
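Whichever driver you pick, most of the per-customer work is just turning each spreadsheet row into a set of field values to type into the form. A minimal Python sketch - with invented field names and the customer data stubbed in place of a real CSV export - looks like this:

```python
import csv
import io

# Stand-in for the customer spreadsheet export; the field names here
# are illustrative, not UPS's actual form fields.
customers = io.StringIO(
    "name,address,city\n"
    "Jane Doe,1 Main St,Springfield\n"
    "John Roe,2 Oak Ave,Shelbyville\n"
)

payloads = []
for row in csv.DictReader(customers):
    # One dict per customer: the "dynamic data" that the automation
    # tool would type into the form, field by field.
    payloads.append(dict(row))

print(len(payloads))  # 2
```

With Selenium, each dict would then drive the form-filling step - roughly one `find_element(...).send_keys(value)` call per field/value pair, followed by a submit.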
