ImportError: No module named 'google.appengine' [duplicate] - python-3.x

I'm testing Google App Engine and trying to run a simple function to upload files to either the Blobstore or Cloud Storage. I'm typing the Python code directly in the Cloud Shell of my instance. The code is failing when I call:
from google.appengine.ext import blobstore
I get this error:
Traceback (most recent call last):
File "upload_test.py", line 1, in <module>
from google.appengine.api import users
ImportError: No module named 'google.appengine'
Even though the documentation says that "You can use Google Cloud Shell, which comes with git and Cloud SDK already installed", I've tried installing a bunch of libraries:
gcloud components install app-engine-python
pip install google-cloud-datastore
pip install google-cloud-storage
pip install --upgrade google-api-python-client
I'm still getting the same error. How can I get the appengine library to work? Alternatively, is this the wrong method for creating an app that allows the user to upload files?

The google.appengine module is baked into the first-generation Python (2.7) runtime. It's not available to install via pip, in the second-generation (3.7) runtime, or in Cloud Shell.
The only way to use it is by writing and deploying a first-generation App Engine app.

Thanks @Dustin Ingram
I found the answer on this page.
The current "correct" way of uploading to Cloud Storage is to use google.cloud.storage. The tutorial I linked above explains how to implement it.
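For reference, here is a minimal sketch of that approach, assuming a bucket named my-bucket already exists (the bucket and function names are illustrative, not from the tutorial):
from google.cloud import storage

def upload_file(local_path, destination_name):
    # The back end receives the file and forwards it to Cloud Storage.
    client = storage.Client()
    bucket = client.bucket("my-bucket")
    blob = bucket.blob(destination_name)
    blob.upload_from_filename(local_path)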
My impression, however, is that this uses twice the bandwidth of the google.appengine solution. Originally, the front end would receive an upload URL and send the file directly to the Blobstore (or to Cloud Storage). Now the application uploads to the back end, which in turn uploads to Cloud Storage.
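For contrast, a minimal sketch of that older first-generation flow, which only works in the Python 2.7 runtime (the /upload handler path is illustrative):
from google.appengine.ext import blobstore

# Generates a one-time URL; the browser POSTs the file straight to the
# Blobstore, and App Engine then invokes the handler at /upload.
upload_url = blobstore.create_upload_url('/upload')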
I'm not too worried, as I will not be dealing with excessively large files, but it seems strange that the ability to upload directly has been discontinued.
In any case, my problem has been solved.

Related

Using JTR in Cloud Functions?

I am trying to use JTR (John the Ripper) to brute-force a PDF file.
The PDF's password is four letters followed by four digits, e.g. ABCD1234 or ZDSC1977.
I've downloaded the jumbo source code from GitHub, and using pdf2john.pl I've extracted the hash.
But the documentation says I need to configure and install john, which is not going to work in my case.
Cloud Functions and Firebase Functions do not allow sudo apt-get installs, which is why we can't use tools like Poppler utils, which includes the amazing pdftotext.
How can I use JTR in Cloud Functions properly, without needing to install it?
Is there any portable or prebuilt version of JTR for Ubuntu 18.04?
It is important to keep in mind that you can't arrange for packages to be installed on Cloud Functions instances, because your code doesn't run with root privileges.
If you need binaries to be available to code deployed to Cloud Functions, you will have to build them yourself for Debian and include them in your functions directory, so they get deployed along with the rest of your code.
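For example, here is a hypothetical sketch of calling such a bundled binary from Python, assuming a statically linked john executable built for Debian was shipped in a bin/ directory next to the function's source:
import os
import shutil
import stat
import subprocess

def run_john(hash_file):
    # The deployed source tree is read-only; /tmp is the only writable
    # path, so copy the binary there and make it executable.
    src = os.path.join(os.path.dirname(__file__), "bin", "john")
    dst = "/tmp/john"
    if not os.path.exists(dst):
        shutil.copy(src, dst)
        os.chmod(dst, os.stat(dst).st_mode | stat.S_IEXEC)
    result = subprocess.run([dst, hash_file], capture_output=True, text=True)
    return result.stdout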
Even if you're able to do that, there's no guarantee it will work, because the Cloud Functions images may not include all the shared libraries required for the executables to run.
You can request that new packages be added to the runtime using the Public Issue Tracker.
Alternatively, you can use Cloud Run or Compute Engine.

lxml library in AWS Lambda

I've included this library as a layer in my Lambda function, but when I go to test it I get the error: cannot import name 'etree' from 'lxml'
There are multiple posts about people having this issue; some say I need to build it so the C libraries get compiled. Most posts say to look for another file or folder named 'lxml', which I've verified is not the issue.
I'm able to run the same code I've deployed to my layer on my local Linux workstation, and it runs without an issue.
It turns out my Lambda was running Python 3.8, and that version is not compatible with the version of lxml I am using (4.5.1). Changing the runtime to 3.7 fixed the issue. Hope this helps someone.
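If you hit this, a quick sanity check is to log both versions from inside the handler; a minimal sketch (LXML_VERSION is lxml's built-in version tuple):
import sys

def handler(event, context):
    # This import fails with "cannot import name 'etree'" when the layer's
    # compiled binaries were built against a different Python version.
    from lxml import etree
    return {"python": sys.version,
            "lxml": ".".join(map(str, etree.LXML_VERSION))}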

Deploy Python app with textract module to Google Cloud Platform

I want to create a Python script that will parse 40,000 PDF files (text and images). Since I found no easy way to check whether a page contains images, I think I should use the textract module.
Ideally I would deploy to Google App Engine.
My question is: for textract I've also had to install other system packages besides Python ones. Can I deploy the script (with a proper requirements.txt file) on Google App Engine without problems, or will I have to use something else?
It is possible to use App Engine, but only with the Flexible environment and a custom runtime, which allows you to add non-Python dependencies (and also Python dependencies not installable via pip):
Custom runtimes allow you to define new runtime environments, which might include additional components like language interpreters or application servers.
See also Building Custom Runtimes.
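As a rough sketch (untested, and the apt packages are only examples of what textract might need), a custom runtime is an app.yaml plus a Dockerfile in the project root; the Dockerfile is where the non-Python dependencies get installed:
# app.yaml
runtime: custom
env: flex

# Dockerfile (apt packages are illustrative; check textract's install docs)
FROM gcr.io/google-appengine/python
RUN apt-get update && apt-get install -y poppler-utils tesseract-ocr
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
ADD . /app
CMD gunicorn -b :$PORT main:app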

Why does a serverless Lambda deploy error with: No module named '_sqlite3'?

There are other questions similar to mine, but I don't think any of them fully fits or answers my case.
I'm deploying a Python 3.6 application on AWS Lambda via the Serverless framework.
With this application I'm using diskcache to perform some small file caching (not actually using sqlite at all).
I'm using the "serverless-python-requirements" plugin in order to have all my dependencies (defined in the requirements.txt file) packed up and uploaded (diskcache in this case).
When the application is live on AWS and I request it, I get back a 500 error, and in my logs I can read:
Unable to import module 'handler': No module named '_sqlite3'
From the answer below, I gather that the sqlite module should not need to be installed:
Python: sqlite no matching distribution found for sqlite
So there is no need (and it won't work) to add sqlite as a requirement...
Then I wonder why AWS Lambda is unable to find sqlite once deployed.
Any hints, please?
Thanks

Where is the BlockBlobService Class Located in Python Azure Module?

I am pretty new to using the Microsoft Azure service and am trying to follow the tutorial at https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python using Python 3.5.6 in a conda 4.5.11 distribution on a Windows PC.
The first problem I am facing: after importing azure, I cannot check the version the usual way. That is,
azure.__version__
gives an error.
Then, this line of code gives me an error saying it can import neither BlockBlobService nor PublicAccess. It seems both have been deprecated, or I am myself using some old version.
from azure.storage.blob import BlockBlobService, PublicAccess #Option 1
However, the following import is working.
from azure.storage.blob import BlobService #Option 2
But the problem with this is that after I create a local file and try to upload it with the create_blob_from_path method (as advised in the tutorial), the method turns out to be either non-existent or deprecated.
I looked around the web for a solution to this BlockBlobService issue, and it seems there has been persistent confusion around the correct module hierarchy and class names to import. One user, for example, got some official documentation from the library advising the following, which also does not work.
from azure.storage import BlobService #Option 3
Someone else reported a complaint about this one, which at least works on my system, but it does not import the needed Blob object.
import azure.storage.blob #Option 4
Further, according to this documentation, https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.blockblobservice.blockblobservice?view=azure-python, the BlockBlobService class is located in the azure.storage.blob.blockblobservice module. But the interpreter throws an import error when I try to import that as well.
Most of the proposed solutions revolve around upgrading or downgrading versions, but, silly me, somehow I cannot even find the version of the azure module the way I do for other modules. Also, it seems many of the solutions are for pip3 running on Linux, whereas I am using conda 4.5.11 on Windows. So how do I make the azure API work?
As of November 2020, Azure maintains two versions of storage SDK:
v12 (Link)
v2.1 (Link)
2.1 is considered to be the legacy version of the API (Link):
This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see Quickstart: Manage blobs with Python v12 SDK.
BlockBlobService is located in v2.1 and should be avoided. Use the v12 version instead.
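For illustration, a minimal v12-style upload, assuming the azure-storage-blob (>= 12) package is installed; the connection string, container, and file names are placeholders:
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="my-container", blob="sample.txt")
with open("sample.txt", "rb") as data:
    blob.upload_blob(data)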
On Windows, you should use pip install azure.
My environment is Windows 10 with Python 3.6.5, but I didn't use conda.
First, in cmd, run pip install azure. Then, in PyCharm, try the from ... import statements; the imports for both BlockBlobService and PublicAccess resolve, and PyCharm shows where BlockBlobService is located. [Screenshots omitted.]
