Jenkins console prints only logger commands from main.py - python-3.x

I have a pipeline in Jenkins, which triggers a python file:
status = sh script: '''
python3 main.py --command check_artfiacts
''', returnStatus: true
As long as I'm in main.py, I get the expected result from the logger in the console:
2019-11-28 22:14:32,027 - __main__ - INFO - starting application from: C:\Tools\BB\jfrog_distribution_shared_lib\resources\com\amdocs\jfrog\methods, with args: C:\Tools\BB\jfrog_distribution_shared_lib\resources\com\amdocs\jfrog\methods
2019-11-28 22:14:32,036 - amd_distribution_check_artifacts_exists - INFO - Coming from func: build_aql_queries
However, when calling a function that lives in another Python file, it doesn't work (the output looks like a plain print):
added helm to aql
amd_distribution_check_artifacts_exists: build_docker_aql_query_by_item
I know for sure it's some pipeline issue, because when running the code from PyCharm it prints everything as expected.
Has anyone faced such an issue?

I found the solution in this thread:
jenkins-console-output-not-in-realtime
Running Python unbuffered with -u worked for me:
python3 -u main.py --command check_artfiacts
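For reference, a minimal sketch of the adjusted pipeline step (same returnStatus handling as above; exporting PYTHONUNBUFFERED=1 in the step's environment should be an equivalent alternative to -u):
status = sh script: '''
python3 -u main.py --command check_artfiacts
''', returnStatus: true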

Related

How can I catch unit-test errors outside the test.py script?

I need to run in CI some run_test.py script:
import os
os.system("test1.py")
os.system("test2.py")
test1.py and test2.py - both unit-test scripts
Even if there is, for example, an ERROR in test1.py, run_test.py still finishes successfully and the CI pipeline passes, though I need it to fail.
Is there any way to catch the test1.py unit-test errors so that the CI pipeline will fail?
PS: running test1.py and test2.py independently from the CI file doesn't work; it only works via the run_test.py script.
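One way to make run_test.py propagate failures, sketched below under the assumption that the scripts are run with python3: check the status that os.system returns (0 means success) and exit non-zero if any script failed, so the CI job is marked as failed:
import os
import sys

# Collect the exit status of each test script (0 means success)
codes = [
    os.system("python3 test1.py"),
    os.system("python3 test2.py"),
]

# Propagate failure so the CI pipeline fails too
sys.exit(1 if any(code != 0 for code in codes) else 0)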

Passing arguments to a Python script in Azure DevOps

I am trying to pass a system variable to the Python script from Azure DevOps. This is what I currently have in the YAML file:
- script: pytest test/test_pipeline.py
    --$(global_variable)
    --junitxml=$(Build.StagingDirectory)/test_pipeline-results.xml
  displayName: 'Testing Pipeline'
  condition: always()
The variable I need in my script is $(global_variable), which contains the value of $(Build.SourcesDirectory); it is a global variable. I get an "unrecognized arguments" error when I run the job.
Any help tackling this would be appreciated.
Thanks!
EDIT:
Complete log:
##[section]Starting: Testing Pipeline
==============================================================================
Task : Command line
Description : Run a command line script using Bash on Linux and macOS and cmd.exe on Windows
Version : 2.151.2
Author : Microsoft Corporation
Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/utility/command-line
==============================================================================
Generating script.
Script contents:
pytest test/test_pipeline.py --my_param="/home/vsts/work/1/s" --junitxml=/home/vsts/work/1/a/test_pipeline-results.xml
========================== Starting Command Output ===========================
[command]/bin/bash --noprofile --norc /home/vsts/work/_temp/64fb4a65-90de-42d5-bfb3-58cc8aa174e3.sh
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --my_param=/home/vsts/work/1/s
inifile: None
rootdir: /home/vsts/work/1/s
##[error]Bash exited with code '4'.
##[section]Finishing: Testing Pipeline
I wrote a simple sample for you to refer to.
In your .py file, use add_argument to read the command-line parameters. In this case, those parameters come from the arguments your task specifies.
A.py file:
import argparse

def parse_argument():
    # Declare the expected command-line options
    parse = argparse.ArgumentParser(description="ForTest!")
    parse.add_argument("-test_data")
    parse.add_argument("-build_dir")
    return parse.parse_args()

args = parse_argument()
And then, in the PythonScript@0 task:
- task: PythonScript@0
  inputs:
    scriptPath: 'A.py'
    arguments: -test_data $(myVariable1) -build_dir $(myVariable2)
myVariable1 and myVariable2 are both custom variables of mine; you can also use environment variables.
Since add_argument reads in parameters from the command line, the script can receive the values specified in the task's arguments.
In addition, for the issue in your question, I think you should delete --my_param="/home/vsts/work/1/s" from your script and try again, because --my_param is not a valid argument for pytest.
Hope this helps.
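If the value really does need to reach pytest itself, a sketch of an alternative (assuming a conftest.py next to the tests, and keeping --my_param as the option name from the log above) is to register the option so pytest stops rejecting it:
import pytest

# conftest.py: register a custom command-line option with pytest
def pytest_addoption(parser):
    # Declare --my_param so pytest accepts it instead of erroring out
    parser.addoption("--my_param", action="store", default=None)

@pytest.fixture
def my_param(request):
    # Tests that take this fixture receive the command-line value
    return request.config.getoption("--my_param")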

Python3 command taking forever to run in Airflow

I am calling a task that runs a python3 command. I have put a print statement, main is called, on the first line of the if __name__ == '__main__': block. However, this statement never gets printed. How can I be sure the file is actually called and that everything before it has executed? From the logs:
[2018-12-20 07:15:24,456] {bash_operator.py:87} INFO - Temporary script location: /tmp/airflowtmpah5gx32p/pscript_pclean_phlc9h6grzqdhm6sc0zrxjne_UdOgg0xdoblvr
[2018-12-20 07:15:24,456] {bash_operator.py:97} INFO - Running command: python3 /usr/local/airflow/rootfs/mopng_beneficiary_v2/scripts/pclean_phlc9h6grzqdhm6sc0zrxjne_UdOgg.py /usr/local/airflow/rootfs/mopng_beneficiary_v2/manual__2018-12-18T12:06:14+00:00/appended/euoEQHIwIQTe1wXtg46fFYok.csv /usr/local/airflow/rootfs/mopng_beneficiary_v2/external/5Feb18_master_ujjwala_latlong_dist_dno_so_v7.csv /usr/local/airflow/rootfs/mopng_beneficiary_v2/external/ppac_master_v3_mmi_enriched_with_sanity_check.csv /usr/local/airflow/rootfs/mopng_beneficiary_v2/manual__2018-12-18T12:06:14+00:00/pcleaned/Qc01sB1s1WBLhljjIzt2S0Ex.csv
[2018-12-20 07:15:24,467] {bash_operator.py:106} INFO - Output:
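Nothing appears after the Output: line above. This looks like the same stdout-buffering behaviour as the Jenkins question above, so the same workaround is a reasonable first thing to try (a sketch; script.py stands in for the generated script path shown in the log):
# Run the interpreter unbuffered in the BashOperator command
python3 -u script.py ...
# Or, inside the script, flush explicitly
print('main is called', flush=True)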

Stop submodules logging - running script from batch file

I have a Python script with a CLI argument parser (based on argparse).
I am calling it from a batch file:
set VAR1=arg_1
set VAR2=arg_2
python script.py --arg1 %VAR1% --arg2 %VAR2%
within the script.py I call a logger:
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
This script uses chromedriver, selenium and requests to automate some clicking and moving between web pages.
When running from within PyCharm (configured so that the script has arg_1 and arg_2 passed to it) everything is great: I get log messages from my logger only.
When I run the batch file, I get a bunch of logging messages from chromedriver or requests (I think).
I have tried:
#echo off at the start of the batch file.
Setting the level on the root logger.
Getting the logging logger dictionary and setting each logger to WARNING - based on this question.
None of these works, and I keep getting logging messages from submodules, ONLY when run from a batch file.
Anybody know how to fix this?
You can use the following configuration options to do this:
import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': True,
})
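A sketch of how this might sit in script.py (the requests import is just an example of a noisy module; note that dictConfig disables only the loggers that already exist when it runs, so the third-party imports need to happen first and your own logger must be created afterwards):
import logging
import logging.config

import requests  # noisy third-party imports first, so their loggers exist

# Disable every logger created so far (i.e. the third-party ones)
logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': True,
})

# Created after the config, so it stays enabled
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)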

Gitlab-CI succeeds on non-zero exit

GitLab CI seems to allow the build to succeed even though the script returns a non-zero exit code. I have the following minimal .gitlab-ci.yml:
# Run linter
lint:
  stage: build
  script:
    - exit 1
Producing the following result:
Running with gitlab-runner 11.1.0 (081978aa)
on gitlab-runner 72348d01
Using Shell executor...
Running on [hostname]
Fetching changes...
HEAD is now at 9f6f309 Still having problems with gitlab-runner
From https://[repo]
9f6f309..96fc77b dev -> origin/dev
Checking out 96fc77bb as dev...
Skipping Git submodules setup
$ exit 1
Job succeeded
Running on GitLab Community Edition 9.5.5 with gitlab-runner version 11.1.0. The closest post doesn't propose a resolution, nor does this issue; a related question shows this setup should fail.
What are the conditions for failing a job? Isn't a non-zero return code enough?
The cause of the problem was that su was wrapped to call ksu, since the shared machines authenticate using Kerberos. The wrapped ksu exits successfully even when the script command fails, so the job is reported as succeeded. This affected gitlab-runner because the shell executor runs su to act as the indicated user.
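A quick sanity check for this on a runner host, sketched below (some_user is a placeholder): compare the exit status su reports with the status of the command it wrapped:
su -c 'exit 1' some_user; echo $?
# A well-behaved su prints 1 here; a wrapper that swallows the inner
# command's status prints 0, which is what gitlab-runner was seeing.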
