Google App Engine: cv2 libSM.so.6 error on Flask app - python-3.x

I have a Flask app that I want to deploy on Google App Engine. Everything checks out fine: the requirements file contains all the modules, and they get installed successfully. But towards the end of the deployment I get this error:
from .cv2 import *
ImportError: libSM.so.6: cannot open shared object file: No such file or directory
I read on various blogs and similar Stack Overflow questions that you need to install libsm6 using:
sudo apt-get install libsm6
But even after I did that, I still get the same error. How do I solve this?

The App Engine runtime includes a fixed set of system packages, and unfortunately libsm6 is not one of them. It is also not possible to install additional system packages.
However, this is an ideal use case for Cloud Run, which lets you define your own runtime via a Dockerfile. See the quickstart for an example: https://cloud.google.com/run/docs/quickstarts/build-and-deploy
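For illustration, a minimal Dockerfile for such a Cloud Run service could look roughly like the sketch below. It assumes the Flask app object is called app in a main.py and that gunicorn is listed in requirements.txt; both are assumptions, so adjust to your project.
FROM python:3.9-slim
# Install the system library cv2 complains about (plus the X libraries it usually pulls in)
RUN apt-get update && apt-get install -y --no-install-recommends libsm6 libxext6 libxrender1 \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Cloud Run provides $PORT; bind the Flask app to it
CMD exec gunicorn --bind :$PORT main:app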

Related

npm sqlite3 package does not work when installing on linux / invalid ELF header

I am trying to get my development environment up and running for my Shopify app, and it has been a while. Shopify updates so rapidly that I am forced to either reintegrate my code with a new Shopify app instance or try to get the original to work. Unfortunately, I cannot run my current application because sqlite3 will not install or work. I have tried installing from source as recommended by the npm documentation. I keep getting this error:
web/node_modules/sqlite3/lib/binding/napi-v6-linux-glibc-x64/node_sqlite3.node: invalid ELF header
Apparently this has to do with the binary not matching my system, which it is supposed to match after running "npm install -g sqlite3 --build-from-source". Is there a way to install a specific build so that I can simply test all the Linux builds to see if I can find a potential match, or see if anything works?
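For what it's worth, a common way to force a binding that matches the host (only a sketch, not a confirmed fix for this setup) is to drop the stale prebuilt binary and rebuild sqlite3 from source inside the project rather than globally:
cd web
# Remove the prebuilt binding with the wrong ELF format, then rebuild against this machine
rm -rf node_modules/sqlite3
npm install sqlite3 --build-from-source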

Some AWS lambda functions stopped working with "No module named setuptools._distutils" error

I have an application with many serverless functions deployed to AWS Lambda. These functions use the Python 3.7 runtime. But yesterday, after deploying some minor changes, a few of these functions stopped working with the following error:
[ERROR] Runtime.ImportModuleError: Unable to import module 'functions.graphql.lambda.user_balance': No module named 'setuptools._distutils'
The weird thing is that the functions throwing this error were not changed, and the other functions are working without any issues. No Python module was added or removed.
Just to check whether the code change had anything to do with this error, I tried deploying a previous version, but the error persists.
I use the Serverless Framework for deployment.
It looks like this is an issue that started happening for all Python users as of yesterday, because setuptools got updated but pip did not.
According to this GitHub issue, there is a temporary workaround until this is actually fixed.
Setting the environment variable SETUPTOOLS_USE_DISTUTILS=stdlib is a workaround, e.g.:
export SETUPTOOLS_USE_DISTUTILS=stdlib
pip3 install ....
My assumption would be that you could add this as an environment variable for the Lambda, possibly through the serverless config?
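Not verified here, but in a serverless.yml that might look something like:
provider:
  name: aws
  runtime: python3.7
  environment:
    # Workaround for the setuptools/distutils mismatch (see the GitHub issue above)
    SETUPTOOLS_USE_DISTUTILS: stdlib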
This is a bug in setuptools: https://github.com/pypa/setuptools/issues/2353. Use the temporary workaround below.
Linux
export SETUPTOOLS_USE_DISTUTILS=stdlib
Windows
set SETUPTOOLS_USE_DISTUTILS=stdlib
After that, execute the pip command.
pip install XXXXX

Error from Azure function: ModuleNotFoundError

I followed the official documentation to set up my requirements.txt file. My understanding was that the function should be able to use modules if they are in requirements.txt. Here is an example of what that file looks like, with all the modules and their versions written in this way:
azure-common==1.1.12
azure-keyvault==1.0.0
azure-mgmt-keyvault==1.0.0
azure-mgmt-nspkg==2.0.0
azure-mgmt-resource==1.2.2
azure-storage-blob==12.3.1
azure-mgmt-subscription==0.5.0
azure-mgmt-network==10.2.0
azure-functions==1.2.1
However, when I look at the function's logs, I can see that it keeps throwing the error, "ModuleNotFoundError: No module named 'azure.common'". This is the first module I try to import in __init__.py. What am I doing wrong?
It seems the modules you use in your function are all old versions (such as azure-common==1.1.12, azure-keyvault==1.0.0, ...). Could you please install the latest versions of the modules? You can search for them on PyPI; for example, to install the latest azure-common module, just run the command pip install azure-common (no version number needed) and it will install the latest version of the module.
Then use the command below in your VS Code terminal to generate the "requirements.txt" automatically:
pip freeze > requirements.txt
Then deploy the function code from local to Azure with the command:
func azure functionapp publish <function app name> --build remote
This will deploy the code to Azure and install the modules according to the "requirements.txt" you just generated.
Hope it helps~

Node package dependencies on IBM Cloud Foundry - require/module is not defined (Package not loading)

I am working on an application via the toolchain tool on IBM Cloud and editing the code via the Eclipse Orion IDE. As I am not accessing this through my local CLI, my understanding is that in order to, so to speak, npm install {package}, I would just need to include the package in the package.json file under dependencies and require it in my app. However, when I load the application, I get a 'require is not defined' error, indicating that the package has not been installed. Moreover, require() works in the app.js file, which is launched with the application, but not in files in my public directory.
After playing around further, it seems it might have to do with the way the directory tree is being traced, as the error is only thrown in subdirectories. For example, require('express') works in app.js, which is in the main directory ./, but fails when it is called in test.js at ./subdirectory/test.js. I feel like I'm missing something painfully simple, like an endpoint configuration or something.
I've been searching around but I can't seem to find how to get the packages loaded, preferably without using the cli. Appreciate any pointers. Thanks!
Update: After playing around further, I am also getting a 'module is not defined' error when trying to export from another file in the same directory. For example, module.exports = 'str' returns this error, while require('./file') returns 'require is not defined'. It might have to do with how Node is wrapping the functions?
Update 2: Tried "start": "npm install && node app.js" in package.json, but no luck. Adding a build stage which calls npm install before deployment also does not work.
Update 3: After adding the npm install build stage, I can see from the logs that the dependencies have been built successfully. However, the 'require is not defined' error still persists.
Update 4: Trying npm install from my CLI doesn't work either, even though all packages and dependencies are present.
Update 5: Running cf restage or configuring the cache via cacheDirectories does not work either.
Opened a related question regarding deployment here
Found out my confusion was caused by not realizing that require() cannot be used on the client side unless the code is bundled via tools such as Browserify.
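For reference, a minimal Browserify workflow looks roughly like this (the file paths are only placeholders):
npm install --save-dev browserify
# Resolve the require() calls at build time and emit a single browser-friendly bundle
npx browserify public/js/main.js -o public/js/bundle.js
# Then load bundle.js (not main.js) from the HTML page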

Domino10 appDevPack: "Error: Cannot find module '@domino/domino-db'"

Just installed the latest Domino 10.0.1 server on my Linux machine and also installed and configured the latest Proton package. As far as I can tell, it's all running fine.
Next I plan to try my first Node-RED flow using the new Domino10 nodes. So I installed the 'node-red-contrib-dominodb' palette.
Finally, I tried my first very simple flow, trying to query node-demo.nsf as described here. From what I read there, I assumed it would be sufficient to install the palette, but that is obviously not the case:
as soon as I hit 'Deploy' I receive this error:
Error: Cannot find module '@domino/domino-db'
So I thought that maybe I still have to do a global install in Node.js using
npm install -g <package-path>/domino-domino-db-1.1.0.tgz
This indeed created a local @domino/domino-db module inside my Node.js npm\node_modules folder. But obviously my Node-RED environment doesn't know about it.
The question is: how do I register/install that npm package for my local Node-RED environment?
IBM's instructions (https://flows.nodered.org/node/node-red-contrib-dominodb#Installation) say to view this guide (https://github.com/stefanopog/node-red-contrib-dominodb/blob/master/docs/Using%20the%20new%20Domino%20V10%20NodeRED%20nodes%202.pdf) for installing the domino-db module.
That link is broken; here's an old copy: https://github.com/stefanopog/node-red-contrib-dominodb/blob/a723ef88498c5bfa243abd956a7cc697f0a42610/docs/Using%20the%20new%20Domino%20V10%20NodeRED%20nodes%202.pdf
I believe the section you want is called "Import the tarball". The steps before that require you to unpack and then re-pack the module... which is unnecessary. Just use the tgz that was in the AppDev Pack to begin with.
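In practice, "importing" the tarball usually comes down to installing it into the Node-RED user directory (assuming the default ~/.node-red) so that Node-RED's own node_modules can resolve it, for example:
cd ~/.node-red
npm install <package-path>/domino-domino-db-1.1.0.tgz
# Restart Node-RED afterwards so the dominodb nodes pick up the module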
