Python invoke: Get the flags from a parent command? - pyinvoke

Given the following python invoke script:
from invoke import task

@task
def pre(c):
    print("pre")

@task(pre=[pre])
def command(c, flag=False):
    print(f"command flag={flag}")
Called with the following shell command:
inv command --flag
I would like to read the value of flag in order to conditionally do some actions in pre. Is there a way to read the flag argument passed to command from within pre using invoke's API? I couldn't find anything about it in the docs.
I am aware that, if push comes to shove, I can import sys and read the args directly, but I'd rather avoid doing that work manually if I can.

After reading through a few GitHub issues, I found that this feature has not been implemented yet.
Discussion can be found here, and a PR for a similar feature can be found here.
The proposed solution in the first discussion is to call the pre/post task directly in the main task:
@task
def pre(c, flag=False):
    print("pre")

@task
def command(c, flag=False):
    pre(c, flag)
    print(f"command flag={flag}")

Related

Is there a better/more pythonic way to load an arbitrary set of functions from modules in another folder?

I'm just basically asking:
if it's considered OK to use exec() in this context
if there's a better/more pythonic solution
for any input or comments on how my code could be improved
First, some context. I have main.py which basically takes input and checks to see if I've written a command. Let's say I type '/help'. The slash just tells it my input was supposed to be a command, so then it checks if a function called 'help' exists, and if so, that function will be run.
To keep things tidy in main.py, and to allow myself to add more commands easily, I have a 'commands' directory, with individual command files in it, such as help.py. help.py would look like this for example:
def help():
    print("You've been helped")
So then of course, I need to import help() from help.py, which was trivial.
As I added more commands, I decided to add an __init__.py file where I'd keep all the command import lines of code, and then just do 'from __init__ import *' in main.py. At first, each time I added a command, I'd add another line in __init__.py to import it. But that wasn't as flexible as I wanted, so I thought there's got to be a way to just loop through all the .py files in my commands directory and import them. I struggled with this for a while but came up with a solution that works.
In the __init__.py snippet below, I loop through the commands directory (and a couple of others, but they're irrelevant to the question), and you'll see I use the dreaded exec() function to actually import the commands.
import os

loaded, failed = '', ''
for directory in command_directories:
    command_list = os.listdir(directory)
    command_list.sort()
    for command_file in command_list:
        if command_file.endswith(".py"):
            command_name = command_file.split(".")[0]
            try:
                # Evil exec() hack to use variable-name directories/modules
                # Haven't found a more... pythonic... way to do this
                exec(f"from {directory}.{command_name} import {command_name}")
                loaded = loaded + f" - Loaded: {command_name}\n"
            except:
                failed = failed + f" - Failed to load: {command_name}\n"

if debug == True:
    for init_debug in [loaded, failed]:
        print(init_debug)
I use exec() because I don't know a better way to create a variable with the name of the function being loaded, so I use {command_name} in my exec string to build the variable name that will hold the imported function. And... well, it works. The functions work perfectly when called from main.py, so I believe they are being imported into the correct namespace.
Obviously, exec() can be dangerous, but I'm not passing any user input into it, just file names. Filenames that only I am creating. This program isn't being distributed, but if it were, then I believe using exec() would be bad, since someone could potentially exploit it.
If I'm wrong about something, I'd love to hear about it and get suggestions for a better implementation. Python has been very easy to pick up for me, but I'm probably missing some of the fundamentals.
I should note that I'm running Python 3.10 on Replit (until I move this project to another host).
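For what it's worth, the usual exec()-free approach is importlib.import_module() combined with getattr(), storing the commands in a dict rather than in loose variables. A minimal sketch, assuming the same commands/ package layout and a command_directories list as above:

import importlib
import os

commands = {}  # maps command name -> function

for directory in command_directories:
    for command_file in sorted(os.listdir(directory)):
        if command_file.endswith(".py"):
            command_name = command_file.split(".")[0]
            # e.g. import commands.help, then grab its help() function
            module = importlib.import_module(f"{directory}.{command_name}")
            commands[command_name] = getattr(module, command_name)

# main.py can then dispatch without exec():
# commands["help"]()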

How to Avoid Pylint Error For Not Passing Argument to Celery Bound Method?

The project is meant to serve a machine learning model via a REST API. Here is the celery task declaration, along with a unit test to check the function is running on a sample input.
@celery.task(bind=True)
def run_prediction(self, inpath: str) -> pandas.DataFrame:
    '''
    :param inpath: File location of the input feature
    '''
    # Do a lot of stuff
    return output  # type: pandas.DataFrame

if __name__ == '__main__':
    # Do a unit test with a sample inpath
    pred: pandas.DataFrame = run_prediction(inpath=sample_inpath)
The run_prediction function can be triggered as a standalone, normal function without involving Celery at all, and also as an asynchronous method via the apply_async provided by Celery. (I did not expect it to work as a standalone function, so any explanation of that would be great.)
The main issue is that when I check the code with pylint (version 2.5.3), it reports a warning and an error, both concerning the self argument:
model_utility.py:92:34: W0613: Unused argument 'self' (unused-argument)
model_utility.py:188:4: E1120: No value for argument 'self' in function call (no-value-for-parameter)
Does this mean pylint is somehow unaware of how the function is called? Can I update the code to get a 10/10 from pylint, or should I change something about pylint itself?
For your information, I am using pylint as a package from my editor (Atom), in case that is relevant to the discussion.
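One common workaround (a sketch of a general pylint technique, not something confirmed in the question) is to suppress the two messages locally, since pylint cannot see that Celery's bind=True decorator supplies self when the task object is called:

import pandas
from celery import Celery

celery = Celery(__name__)  # hypothetical app instance, matching the question's `celery` name

@celery.task(bind=True)
def run_prediction(self, inpath: str) -> pandas.DataFrame:  # pylint: disable=unused-argument
    '''Stub body; the real task does the prediction work.'''
    return pandas.DataFrame()

if __name__ == '__main__':
    # Celery's decorator supplies `self`, which pylint cannot know:
    pred: pandas.DataFrame = run_prediction(inpath='sample.csv')  # pylint: disable=no-value-for-parameter

There is also a pylint-celery plugin on PyPI that teaches pylint about some Celery patterns, though I have not verified that it covers this case.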

Executing a Python script in SnapLogic

I am trying to run a Python script through the Script snap in SnapLogic. I am facing an issue where it asks me to declare a script hook variable. Can you please help me with that?
With the Script snap you should use the "Edit Script" button on the snap itself. This will open a script editor and generate a skeleton script in the language you've selected (Python in this case).
In the skeleton you can see the baseline methods and functions we define, including the usage of, and comments about, the scripthook variable. If you have an existing script, I would recommend writing it into this skeleton's execute method rather than trying to implement scripthook in your existing code. You can also define your own methods and functions within the confines of the skeleton class and reference them with "self." notation.
For faster answers on SnapLogic-related questions, I'd recommend visiting the SnapLogic Community site.
As explained by @dwhite0101, within the Script snap, when you click Edit Script you get the option to generate a code template.
ScriptHook is an interface implemented as a callback mechanism for the Script snap to call into the script.
It helps you deal with input and output rows. The constructor below initializes the input, output, error and log variables.
The self object is similar to this in C++; it holds the current row's values.
class TransformScript(ScriptHook):
    def __init__(self, input, output, error, log):
        self.input = input
        self.output = output
        self.error = error
        self.log = log
You can perform transformations in execute method:
    def execute(self):
        self.log.info("Executing Transform script")
        while self.input.hasNext():
            in_doc = self.input.next()
            # java.util must be imported (the generated skeleton does this)
            wrapper = java.util.HashMap()
            for field in in_doc:
                # your code
                pass
The next step is to store your results in an object and output them:
            wrapper['original'] = result
            self.output.write(result, wrapper)
Make sure to indent your code properly.
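For completeness: the generated skeleton ends by assigning an instance of the class to a hook variable, which is exactly the "script hook variable" the snap asks you to declare. Roughly (reconstructed from memory of the template, so treat it as a sketch):

# The Script snap looks for this variable and calls back into it;
# input, output, error and log are provided by the snap's runtime.
hook = TransformScript(input, output, error, log)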

Read file from Jenkins workspace with System groovy script

I have a question very similar to this: Reading file from Workspace in Jenkins with Groovy script
However I need to read the file from a System Groovy script so the solution of using Text-finder or the Groovy PostBuild plugin will not work.
How can I get the workspace path from a system groovy script? I have tried the following:
System.getenv('WORKSPACE')
System.getProperty("WORKSPACE")
build.buildVariableResolver.resolve("WORKSPACE")
Thanks!
Suppose you have a file called "a.txt" in your workspace, along with a script called "sysgvy.groovy" that you want to execute as a system Groovy script, and "sysgvy.groovy" needs to read "a.txt".
The issue is that if your script reads "a.txt" directly, without providing any path, "sysgvy.groovy" will throw an error saying it cannot find "a.txt".
I have tested and found that the following method works well.
def build = Thread.currentThread().executable
Then use
build.workspace.toString() + "\\a.txt"
as the full location string to replace "a.txt".
It's also important to run this on the Jenkins master machine, placing "a.txt" and "sysgvy.groovy" in the Jenkins master machine's workspace; executing on a slave machine does not work.
Try it: the file should be found and read in the script without any problem.
If there is a problem with the Thread variable, it is just that some modules need to be imported, so add these lines at the start of the code:
import jenkins.*
import jenkins.model.*
import hudson.*
import hudson.model.*
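Putting the pieces together, a minimal system Groovy script along these lines (a sketch, reusing the "a.txt" example from above) would be:

import hudson.model.*

// On an executor thread, "executable" is the currently running build
def build = Thread.currentThread().executable

// Resolve a.txt relative to the build's workspace and read it
def path = build.workspace.toString() + File.separator + "a.txt"
println new File(path).text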
Each build has a workspace, so you need to find the desired project first. (The terms "job" and "project" are used rather interchangeably in Jenkins, including in the API.)
After that, you can either cross your fingers and just call getWorkspace(), which is deprecated (see the JavaDoc for details).
Or you can find a specific build (e.g. the last one), which can give you the workspace used for that specific build via the getWorkspace() method as defined in the AbstractBuild class.
Example code:
Jenkins.instance.getJob('<job-name>').lastBuild.workspace;
Just use
build.workspace
The "build" variable is available keyword in System Groovy Script.

Passing build parameters downstream using groovy. Jenkins build pipeline

I've seen a number of examples that execute a pre build system groovy script to the effect of
import hudson.model.*

def thr = Thread.currentThread()
def build = thr?.executable
printf "Setting SVN_UPSTREAM as " + build.getEnvVars()['SVN_REVISION'] + "\n"
build.addAction(new ParametersAction(new StringParameterValue('SVN_UPSTREAM', build.getEnvVars()['SVN_REVISION'])))
Which is intended to make SVN_UPSTREAM available to all downstream jobs.
With this in mind, I attempt to use $SVN_UPSTREAM in a manually executed downstream job, like so:
https://code.mikeyp.com/svn/mikeyp/client/trunk#$SVN_UPSTREAM
The variable is not resolved, causing an error.
Can anyone spot the problem here?
The bleeding-edge Jenkins Build Pipeline plugin now supports parameter passing, which eliminated the need for the Groovy workaround for me.
Make sure the parameter you are passing downstream is not also defined as a parameter in the downstream job where you wish to use it. That is, in the downstream job, if you have "This build is parameterized" checked, do not add SVN_UPSTREAM to the list of parameters; if you do, it overrides the preset value.
