VSCode test explorer stops discovering tests when I add an import to a Python code file - python-3.x

This Python code file works perfectly. But when I add either of the commented imports, the VSCode test feature gives "No tests discovered, please check the configuration settings for the tests." There are no other errors.
# import boto3
# import pymysql
import decimal
import datetime

def increment(x):
    return x + 1

def decrement(x):
    return x - 1
What is it that I don't understand about imports and the test feature that explains why these would break the test explorer?
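One way to narrow this down (my addition, not part of the original question): test discovery has to import the file, so an import that fails in the interpreter VSCode is configured to use surfaces only as an empty test list. A minimal check, run with that same interpreter:
import importlib

# If boto3 or pymysql is missing from the environment the test explorer uses,
# discovery fails at import time and reports "No tests discovered".
for name in ("boto3", "pymysql"):
    try:
        importlib.import_module(name)
        print(f"{name}: importable")
    except ModuleNotFoundError as exc:
        print(f"{name}: not importable in this interpreter -> {exc}")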

Related

Error Launching Blob Trigger function in Azure Functions expected str, bytes or os.PathLike object, not PosixPath

My problem is: when I execute a freshly uploaded Python function in an Azure Function App and launch it (no matter whether I use a blob trigger or an HTTP trigger), I always get the same error:
Exception while executing function: Functions.TestBlobTrigger Result: Failure
Exception: TypeError: expected str, bytes or os.PathLike object, not PosixPath
Stack: File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/dispatcher.py", line 284, in _handle__function_load_request
    func = loader.load_function(
  File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/utils/wrappers.py", line 40, in call
    return func(*args, **kwargs)
  File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/loader.py", line 53, in load_function
    register_function_dir(dir_path.parent)
  File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/loader.py", line 26, in register_function_dir
    _submodule_dirs.append(fspath(path))
Why is this happening: when the function is successfully deployed, I upload a file to a blob in order to trigger the function, but I always get the same error, caused by the pathlib library (https://pypi.org/project/pathlib/). I have written a very simple function that works in my local VSCode and just prints a message.
import logging
import configparser
import azure.functions as func
from azure.storage.blob import BlockBlobService
import os
import datetime
import io
import json
import calendar
import aanalytics2 as api2
import time
import pandas as pd
import csv
from io import StringIO

def main(myblob: func.InputStream):
    logging.info("Blob Trigger function launched")
    blob_bytes = myblob.read()
    blobmessage = blob_bytes.decode()
    func1 = PythonAPP.callMain()
    func1.main(blobmessage)
The PythonAPP class is:
class PythonAPP:
    def __init__(self):
        logging.info('START extractor.')
        self.parameter = "product"

    def main(self, message1):
        var1 = "--"
        try:
            var1 = "---"
            logging.info('END: ->paramet ' + str(message1))
        except Exception as inst:
            logging.error("Error PythonAPP.main: " + str(inst))
        return var1
My requirements.txt file is:
azure-storage-blob==0.37.1
azure-functions-durable
azure-functions
pandas
xlrd
pysftp
openpyxl
configparser
PyJWT==1.7.1
pathlib
dicttoxml
requests
aanalytics2
I've created this simple function in order to check whether I can upload the simplest example to Azure Functions. Are there any dependencies that I am forgetting?
Checking the status of the functions, I found this:
------------UPDATE1--------------------
The function is failing because of the pathlib import: the function's requirements pull in this library (via aanalytics2), and it fails with Azure Functions. Please see the requirements.txt file at the following link: https://github.com/pitchmuc/adobe_analytics_api_2.0/blob/master/requirements.txt
Can I exclude it somehow?
Well, I can't provide an answer for that, but I made a workaround. I created a copy of the library in a GitHub repository, and in that copy I erased the references to pathlib in the requirements.txt and setup.py, because this library causes the failure in Azure Function Apps. Then, in the requirements file of the project, reference that copy instead; that is, take the requirements file I wrote above and change the aanalytics2 line to:
git+git://github.com/theURLtotherepository.git@master#egg=theprojectname
LINKS.
I've checked a lot of examples on Google, but none of them helped me:
Azure function failing after successful deployment with OSError: [Errno 107]
https://github.com/Azure/azure-functions-host/issues/6835
https://learn.microsoft.com/en-us/answers/questions/39865/azure-functions-python-httptrigger-with-multiple-s.html
https://learn.microsoft.com/en-us/answers/questions/147627/azure-functions-modulenotfounderror-for-python-scr.html
Missing Dependencies on Python Azure Function --> no bundler flag or --build remote
https://github.com/OpenMined/PySyft/issues/3400
This is a bug in the Azure codebase itself, specifically within azure-functions-python-worker/azure_functions_worker/dispatcher.py.
The problematic code in the dispatcher sets up the exposed functions with the metadata parameters found in function.json.
You will not encounter the issue if you're not using pathlib within your function app / web app code; if pathlib is available, the issue manifests.
Rather than simple os.path strings, pathlib.Path objects are exposed; deeper down in the codebase there looks to be a conditional import and use of pathlib.
To resolve, simply remove pathlib from your requirements.txt and redeploy.
You'll need to refactor any of your own code that used pathlib to use the equivalent functions in the os module.
There looks to have been a ticket opened to resolve this around the time of the OP's post, but it is not resolved in the current release.
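As a hypothetical before/after sketch of that refactor (the file name config.ini is made up for illustration):
# before: pathlib-based code of the kind the answer says to replace
# from pathlib import Path
# text = (Path(__file__).parent / "config.ini").read_text()

# after: the same logic using only os / os.path and built-in open
import os

config_file = os.path.join(os.path.dirname(__file__), "config.ini")
with open(config_file, "r", encoding="utf-8") as f:
    text = f.read()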

JupyterLab 3: how to get the list of running servers

Since JupyterLab 3.x, jupyter-server is used instead of the classic notebook server, and the following code no longer lists servers served with jupyter_server:
from notebook import notebookapp
notebookapp.list_running_servers()
None
What still works for the file/notebook name is:
from time import sleep
from IPython.display import display, Javascript
import subprocess
import os
import uuid

def get_notebook_path_and_save():
    magic = str(uuid.uuid1()).replace('-', '')
    print(magic)
    # saves it (ctrl+S)
    # display(Javascript('IPython.notebook.save_checkpoint();'))  # Javascript Error: IPython is not defined
    nb_name = None
    while nb_name is None:
        try:
            sleep(0.1)
            # find the notebook whose saved output contains the magic string
            nb_name = subprocess.check_output(f'grep -l {magic} *.ipynb', shell=True).decode().strip()
        except subprocess.CalledProcessError:
            pass
    return os.path.join(os.getcwd(), nb_name)
But it is neither Pythonic nor fast.
How can I get the currently running server instances, and from there e.g. the current notebook file?
Migration to jupyter_server should be as easy as changing notebook to jupyter_server and notebookapp to serverapp, and updating the appropriate configuration files; the server-related codebase is largely unchanged. To list servers, simply use:
from jupyter_server import serverapp
serverapp.list_running_servers()
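For example, to print what is running (a small sketch on top of the answer; the root_dir key is my assumption about jupyter_server's server-info dict, replacing the old notebook_dir):
from jupyter_server import serverapp

for srv in serverapp.list_running_servers():
    # each entry is a dict describing one running server
    print(srv["url"], srv.get("root_dir"))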

OpenMDAO and NSGA II

I found some interesting code in openmdao\drivers\tests\test_pyoptsparse_driver.py that seems to reference NSGA-II. When I tried running the test code, I noticed that this optimizer is not available.
import sys
import copy
import unittest

sys.path.insert(0, r"[SOMEPATH Here]\GitHub\OpenMDAO")

from distutils.version import LooseVersion
import numpy as np
import openmdao.api as om
from openmdao.test_suite.components.paraboloid import Paraboloid
from openmdao.test_suite.components.expl_comp_array import TestExplCompArrayDense
from openmdao.test_suite.components.sellar import SellarDerivativesGrouped
# from openmdao.utils.assert_utils import assert_near_equal  # NOTE: this function isn't available in the pip install
from openmdao.utils.general_utils import set_pyoptsparse_opt, run_driver
from openmdao.utils.testing_utils import use_tempdirs
from openmdao.utils.mpi import MPI

_, local_opt = set_pyoptsparse_opt('NSGA2')
if local_opt != 'NSGA2':
    raise unittest.SkipTest("pyoptsparse is not providing NSGA2")  # the code basically fails here
Error that I am seeing:
"pyoptsparse is not providing NSGA2"
Can I add NSGA-II if it's not available?
When that test was written, NSGA-II was a little difficult to compile with pyoptsparse. I think there are still some challenges with it, but it mostly works now. As of OpenMDAO V3.0 we're not using NSGA-II for anything internally, but if you get it to work, feel free to send a PR with an updated test!
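If you do get a pyoptsparse build that provides NSGA2, hooking it up looks roughly like this (a sketch, not from the answer; the commented opt_settings keys are assumptions about pyoptsparse's NSGA2 wrapper):
import openmdao.api as om
from openmdao.test_suite.components.paraboloid import Paraboloid

prob = om.Problem()
prob.model.add_subsystem('comp', Paraboloid(), promotes=['x', 'y', 'f_xy'])

# select NSGA2 through the pyoptsparse driver
prob.driver = om.pyOptSparseDriver()
prob.driver.options['optimizer'] = 'NSGA2'
# prob.driver.opt_settings['PopSize'] = 40   # assumed NSGA2 setting name
# prob.driver.opt_settings['maxGen'] = 100   # assumed NSGA2 setting name

prob.model.add_design_var('x', lower=-50.0, upper=50.0)
prob.model.add_design_var('y', lower=-50.0, upper=50.0)
prob.model.add_objective('f_xy')

prob.setup()
prob.run_driver()
print(prob.get_val('f_xy'))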

Robot Framework test fails when trying to import another module in a test or keyword file

I am trying to set up a custom test library and a user keyword using Robot Framework.
I created my test/user-keyword file, hello.py, as shown below. Please look at the import statement in hello.py.
If I comment out the import statement (import src.factory.serverconnectfactory) and run, the test case PASSES.
If I uncomment the import statement, the test case FAILS, and I don't know why.
I tried Googling a lot and found no clue; even reading the Robot Framework user guide didn't help.
How can I import src.factory.serverconnectfactory and use it in hello.py?
Example: hello.py
from robot.api.deco import keyword

# import src.factory.serverconnectfactory  # <-- this line is the issue

class helloworld:
    def __init__(self):
        pass

    @keyword("print hello")
    def printhelloworld(self):
        return "Hello World"
My Robot test file:
*** Settings ***
Library    ..//..//keywordslib//helloworld.py

*** Test Cases ***
This is some test case
    print hello
The error I used to get is:
Importing test library failed: ModuleNotFoundError: No module named
Thanks in advance.
Solution:
Robot Framework expects the file name and the class name to be the same.
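A minimal illustration of that rule (my example, not from the answer): if the library file is HelloWorld.py, the class inside must also be named HelloWorld, and the Library line in the .robot file then points at HelloWorld.py.
# HelloWorld.py -- file name and class name match, as Robot Framework expects
from robot.api.deco import keyword

class HelloWorld:
    @keyword("print hello")
    def print_hello_world(self):
        return "Hello World"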

ipython notebook --script deprecated. How to replace with post save hook?

I have been using "ipython --script" to automatically save a .py file for each IPython notebook so I can use it to import classes into other notebooks. But this recently stopped working, and I get the following error message:
`--script` is deprecated. You can trigger nbconvert via pre- or post-save hooks:
ContentsManager.pre_save_hook
FileContentsManager.post_save_hook
A post-save hook has been registered that calls:
ipython nbconvert --to script [notebook]
which behaves similarly to `--script`.
As I understand this I need to set up a post-save hook, but I do not understand how to do this. Can someone explain?
[UPDATED per comment by @mobius dumpling]
Find your config files:
Jupyter / ipython >= 4.0
jupyter --config-dir
ipython <4.0
ipython locate profile default
If you need a new config:
Jupyter / ipython >= 4.0
jupyter notebook --generate-config
ipython <4.0
ipython profile create
Within this directory there will be a file called [jupyter | ipython]_notebook_config.py; put the following code from ipython's GitHub issues page in that file:
import os
from subprocess import check_call

c = get_config()

def post_save(model, os_path, contents_manager):
    """post-save hook for converting notebooks to .py scripts"""
    if model['type'] != 'notebook':
        return  # only do this for notebooks
    d, fname = os.path.split(os_path)
    check_call(['ipython', 'nbconvert', '--to', 'script', fname], cwd=d)

c.FileContentsManager.post_save_hook = post_save
For Jupyter, replace ipython with jupyter in check_call.
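That substitution is a one-word change to the call above (same fname and d variables from the hook):
check_call(['jupyter', 'nbconvert', '--to', 'script', fname], cwd=d)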
Note that there's a corresponding 'pre-save' hook, and also that you can call any subprocess or run any arbitrary code there... if you want to do anything fancy like checking some condition first, notifying API consumers, or adding a git commit for the saved script.
Cheers,
-t.
Here is another approach that doesn't spawn a new process (via check_call). Add the following to jupyter_notebook_config.py, as in Tristan's answer:
import io
import os
from notebook.utils import to_api_path

_script_exporter = None

def script_post_save(model, os_path, contents_manager, **kwargs):
    """convert notebooks to Python script after save with nbconvert

    replaces `ipython notebook --script`
    """
    from nbconvert.exporters.script import ScriptExporter

    if model['type'] != 'notebook':
        return

    global _script_exporter
    if _script_exporter is None:
        _script_exporter = ScriptExporter(parent=contents_manager)

    log = contents_manager.log

    base, ext = os.path.splitext(os_path)
    py_fname = base + '.py'
    script, resources = _script_exporter.from_filename(os_path)
    script_fname = base + resources.get('output_extension', '.txt')
    log.info("Saving script /%s", to_api_path(script_fname, contents_manager.root_dir))
    with io.open(script_fname, 'w', encoding='utf-8') as f:
        f.write(script)

c.FileContentsManager.post_save_hook = script_post_save
Disclaimer: I'm pretty sure I got this from SO somewhere, but can't find it now. Putting it here so it's easier to find in future (:
I just encountered a problem where I didn't have rights to restart my Jupyter instance, and so the post-save hook I wanted couldn't be applied.
So, I extracted the key parts and could run this with python manual_post_save_hook.py:
from io import open
from re import sub
from os.path import splitext

from nbconvert.exporters.script import ScriptExporter

for nb_path in ['notebook1.ipynb', 'notebook2.ipynb']:
    base, ext = splitext(nb_path)
    script, resources = ScriptExporter().from_filename(nb_path)

    # mine happen to all be in Python so I needn't bother with the full flexibility
    script_fname = base + '.py'

    with open(script_fname, 'w', encoding='utf-8') as f:
        # remove 'In [ ]' commented lines peppered about
        f.write(sub(r'[\n]{2}# In\[[0-9 ]+\]:\s+[\n]{2}', '\n', script))
You can add your own bells and whistles as you would with the standard post-save hook, and the config is the correct way to proceed; I'm sharing this for others who might end up in a similar pinch where they can't get the config edits to take effect.