Is it possible to create a JupyterLab extension without using the cookiecutter template? - jupyter-lab

I've found plenty of resources online about creating a JupyterLab extension with the cookiecutter template (see the tutorial at https://jupyterlab.readthedocs.io/en/3.4.x/extension/extension_tutorial.html#extension-tutorial). Has anyone created an extension without using the cookiecutter package?
I've already created one with the cookiecutter package; I'm just trying to eliminate the dependency on it.

Related

How can I use modules in Azure ML studio designer pipeline?

I am currently using a Python script in my Azure pipeline:
Import data as DataFrame --> Run Python Script --> Export DataFrame
The script was developed locally, and I get import errors when trying to import tensorflow... No problem, I guess I just have to add it to the environment dependencies somewhere -- and this is where the documentation fails me. It seems to rely on the SDK without touching the GUI, but I am using the designer.
I have at this point already built some environments with the dependencies, but how to use these environments at the run or script level is not obvious to me.
It seems trivial, so any help on how to use modules is greatly appreciated.
To use modules that are not preinstalled (see Preinstalled Python packages), you need to connect a zipped file containing the new Python packages to the Script bundle port. See this description in the document:
To include new Python packages or code, connect the zipped file that contains these custom resources to Script bundle port. Or if your script is larger than 16 KB, use the Script Bundle port to avoid errors like CommandLine exceeds the limit of 16597 characters.
1. Bundle the script and other custom resources into a zip file.
2. Upload the zip file as a File Dataset to the studio.
3. Drag the dataset module from the Datasets list in the left module pane on the designer authoring page.
4. Connect the dataset module to the Script Bundle port of the Execute Python Script module.
Please check out the document How to configure Execute Python Script.
For more information about how to prepare and upload these resources, see Unpack Zipped Data.
You can also check out this similar thread.

Why do some Python packages fail to install in Azure Functions V2?

I'm trying to publish my app to Azure Functions from Visual Studio Code. The following are my dependencies:
pyodbc==4.0.26
pandas==0.25.0
numpy==1.16.4
azure-eventhub==1.3.1
When I publish the app, I get the following error:
ERROR: cannot install cryptography-2.7 dependency: binary dependencies without wheels are not supported. Use the --build-native-deps option to automatically build and configure the dependencies using a Docker container. More information at https://aka.ms/func-python-publish
This is a limitation of the way Azure Functions uses pip to download wheels. cryptography publishes an abi3 manylinux wheel, but this command can't download it successfully. For more information (and a workaround) see: https://github.com/Azure/azure-functions-core-tools/issues/1150
The link in the error message does answer your exact question:
If you're using a package that requires a compiler and does not support the installation of manylinux-compatible wheels from PyPI, publishing to Azure will fail.
If you're asking "why was it designed this way?", that's a different question and out of scope for Stack Overflow. You might want to ask on the Functions GitHub.

Deploy Python app with textract module to Google Cloud Platform

I want to create a Python script that will parse 40,000 PDF files (text and images). Since I saw that there is no easy method to check whether a page contains images, I think I should use the textract module.
Ideally I would deploy to Google App Engine.
My question is: for textract I also had to install other system packages besides the Python ones. Can I deploy the script (with a proper requirements.txt file) on Google App Engine without problems, or will I have to use something else?
It is possible to use App Engine, but only with the Flexible environment and a custom runtime, which allows you to add non-Python dependencies (and also Python dependencies not installable via pip):
Custom runtimes allow you to define new runtime environments, which might include additional components like language interpreters or application servers.
See also Building Custom Runtimes.

How to generate CLI documentation from python code?

I am using the argparse package, and I am interested in generating documentation like the --help output does, but in HTML/LaTeX or any of the usual formats. I am already using Sphinx for package-reference generation, so an extension to that would be the best solution.
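One approach that fits an existing Sphinx setup (not mentioned in this thread) is the third-party sphinx-argparse extension, which renders an ArgumentParser into the built docs. It expects a function that returns the parser; a minimal sketch, with hypothetical module and tool names:

```python
# Sketch for use with the third-party sphinx-argparse extension: expose a
# function that builds and returns the parser, then point the
# ``.. argparse::`` directive at it from an .rst page, e.g.
#
#     .. argparse::
#        :module: mypackage.cli
#        :func: get_parser
#        :prog: mytool
#
# "mypackage.cli" and "mytool" are hypothetical placeholder names.
import argparse

def get_parser():
    parser = argparse.ArgumentParser(
        prog="mytool",
        description="Example CLI whose help is rendered by Sphinx.")
    parser.add_argument("--count", type=int, default=1,
                        help="How many times to run.")
    return parser
```

With that in place, Sphinx builds the same argument descriptions that `--help` prints, in HTML, LaTeX, or any other supported builder.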

Python 3.5 support for Google-Contacts V3 API

I'm trying to work with the Google Contacts API using Python 3.5. This presents an issue because the gdata library that is supposed to be used is not up to date for Python 3.5. I can use OAuth2 to grab the contact data as JSON and use that in my project, but part of the application also involves adding a contact to the user's contact list. I cannot find any documentation on this part besides using the gdata library, which I cannot do. The majority of the project requires Python 3, so switching to Python 2 is not something I could easily do. Is there any further documentation, or a workaround for using the gdata library with Python 3? I'm actually very surprised that the Contacts API seems so thinly supported in Python. If anyone has any further information, it would be much appreciated.
For me, I had to install it as pip install git+https://github.com/dvska/gdata-python3 (without the egg), since the package itself contains a src dir; otherwise import gdata would fail. (Python 3.6.5 in a virtual env.)
GData Py3k version: pip install -e git+https://github.com/dvska/gdata-python3#egg=gdata
