How to test a Google Drive API Python client (python-3.x)

I currently have a Google Drive API client in my Django project that works as expected.
import unittest
from unittest import mock

import google.oauth2.credentials
from googleapiclient.discovery import build

DRIVE_API_VERSION = "v3"
DRIVE_API_SERVICE_NAME = "drive"
DRIVE_AUTHORIZED_USER_FILE = "path/to/secrets/json/file"
DRIVE_SCOPES = ['https://www.googleapis.com/auth/drive',
                'https://www.googleapis.com/auth/drive.file',
                'https://www.googleapis.com/auth/drive.appdata']

def construct_drive_service():
    drive_credentials = None  # bind the name even when the file is missing
    try:
        drive_credentials = google.oauth2.credentials.Credentials.from_authorized_user_file(
            DRIVE_AUTHORIZED_USER_FILE, scopes=DRIVE_SCOPES)
    except FileNotFoundError:
        print('Drive credentials not created')
    if drive_credentials:
        return build(DRIVE_API_SERVICE_NAME, DRIVE_API_VERSION,
                     credentials=drive_credentials, cache_discovery=False)
    return None
The challenge now is to write tests for this function, but I don't know what strategy to use. I have tried this:
class TestAPICalls(unittest.TestCase):
    @mock.patch('api_calls.google.oauth2.credentials', autospec=True)
    def setUp(self, mocked_drive_cred):
        self.mocked_drive_cred = mocked_drive_cred

    @mock.patch('api_calls.DRIVE_AUTHORIZED_USER_FILE')
    def test_drive_service_creation(self, mocked_file):
        mocked_file.return_value = "some/file.json"
        self.mocked_drive_cred.Credentials.return_value = mock.sentinel.Credentials
        construct_drive_service()
        self.mocked_drive_cred.Credentials.from_authorized_user_file.assert_called_with(mocked_file)
But my tests fail with the error below:
with io.open(filename, 'r', encoding='utf-8') as json_file:
ValueError: Cannot open console output buffer for reading
I know the client is trying to read a file but is getting a mock object instead. The problem is that I have no idea how to go about solving this. I have been reading up on the mock library, but the whole thing is still hazy.
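One strategy that works here, shown as a minimal sketch (assuming the code above lives in api_calls.py, which matches the patch targets already used): patch the Credentials class so from_authorized_user_file never touches the disk, and patch build so no real service client is constructed.

import unittest
from unittest import mock

import api_calls  # assumed module name, taken from the patch targets above

class TestAPICalls(unittest.TestCase):
    @mock.patch('api_calls.build', autospec=True)
    @mock.patch('api_calls.google.oauth2.credentials.Credentials', autospec=True)
    def test_drive_service_creation(self, mocked_cred_cls, mocked_build):
        # Stub out the classmethod that reads the file, so no real I/O happens
        mocked_cred_cls.from_authorized_user_file.return_value = mock.sentinel.credentials

        service = api_calls.construct_drive_service()

        mocked_cred_cls.from_authorized_user_file.assert_called_once_with(
            api_calls.DRIVE_AUTHORIZED_USER_FILE, scopes=api_calls.DRIVE_SCOPES)
        mocked_build.assert_called_once_with(
            api_calls.DRIVE_API_SERVICE_NAME, api_calls.DRIVE_API_VERSION,
            credentials=mock.sentinel.credentials, cache_discovery=False)
        self.assertIs(service, mocked_build.return_value)

Two details matter: stacked mock.patch decorators are applied bottom-up, so the decorator closest to the method maps to the first mock argument, and the patches belong on the test method itself rather than on setUp, where they would stop applying as soon as setUp returns.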


Python-asyncio and subprocess deployment on IIS: returning HTTP response without running another script completely

I'm facing an issue creating real-time status updates for merging new datasets with an old one and for the machine-learning model creation results, via a web framework. The task consists of the following simple steps:
A user/client sends a new dataset as a .CSV file to the server;
on the server side, my Windows machine receives the file and sends back an acknowledgement;
the new dataset is merged with the old one for the new machine-learning model creation; and
another Python script is run (the one that creates a new sequential deep-learning model). Only after that second script completes successfully should my code return the response to the client.
I have deployed my Python Flask application on IIS 10. To run that second Python script, the main Flask API script has to wait for the model-creation script to complete. The model-creation script contains several stages: loading datasets, tokenizing, one-hot encoding, padding, model training for 100 epochs, and finally the prediction results.
My exact goal is that the Flask API should wait until the entire process completes. I'm certain it takes 8-9 minutes to finish the whole script launched via subprocess.run(). In development mode this code works excellently, without any issues. But in production mode on IIS it does not wait for the whole process: within 6-7 seconds it returns a response to the client.
For debugging purposes I added logging to record all events in both the Flask script and the model-creation script. From that I learned that the model-creation script only ran about 10% of the way. First I tried simple async def and await around subprocess.run(), which made no difference. Then I added threading with get_event_loop() and run_until_complete() to make my parent code wait until the whole process finishes. But I'm stuck: I haven't been able to find a working solution. Please let me know what I did wrong. Thank you.
Configurations:
Python 3.7.9
Windows Server 2019 and
IIS 10.0 Express
My code:
import os
import time
import glob
import subprocess
import pandas as pd
from flask import Flask, request, jsonify
from werkzeug.utils import secure_filename
from datetime import datetime
import logging
import asyncio
from concurrent.futures import ThreadPoolExecutor

ALLOWED_EXTENSIONS = {'csv', 'xlsx'}
_executor = ThreadPoolExecutor(1)

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = "C:\\inetpub\\wwwroot\\iAssist_IT_support\\New_IT_support_datasets"
currentDateTime = datetime.now()
filenames = None

logger = logging.getLogger(__name__)
app.logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s:%(name)s:%(message)s')
file_handler = logging.FileHandler('model-creation-status.log')
file_handler.setFormatter(formatter)
# stream_handler = logging.StreamHandler()
# stream_handler.setFormatter(formatter)
app.logger.addHandler(file_handler)
# app.logger.addHandler(stream_handler)

def allowed_file(filename):
    return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

@app.route('/file_upload')
def home():
    return jsonify("Hello, This is a file-upload API, To send the file, use http://13.213.81.139/file_upload/send_file")

@app.route('/file_upload/status1', methods=['POST'])
def upload_file():
    app.logger.debug("/file_upload/status1 is execution")
    # check if the post request has the file part
    if 'file' not in request.files:
        app.logger.debug("No file part in the request")
        response = jsonify({'message': 'No file part in the request'})
        response.status_code = 400
        return response
    file = request.files['file']
    if file.filename == '':
        app.logger.debug("No file selected for uploading")
        response = jsonify({'message': 'No file selected for uploading'})
        response.status_code = 400
        return response
    if file and allowed_file(file.filename):
        filename = secure_filename(file.filename)
        file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
        print(filename)
        print(file)
        app.logger.debug("Spreadsheet received successfully")
        response = jsonify({'message': 'Spreadsheet uploaded successfully'})
        response.status_code = 201
        return response
    else:
        app.logger.debug("Allowed file types are csv or xlsx")
        response = jsonify({'message': 'Allowed file types are csv or xlsx'})
        response.status_code = 400
        return response

@app.route('/file_upload/status2', methods=['POST'])
def status1():
    global filenames
    app.logger.debug("file_upload/status2 route is executed")
    if request.method == 'POST':
        # Get data in json format
        if request.get_json():
            filenames = request.get_json()
            app.logger.debug(filenames)
            filenames = filenames['data']
            # print(filenames)
            folderpath = glob.glob('C:\\inetpub\\wwwroot\\iAssist_IT_support\\New_IT_support_datasets\\*.csv')
            latest_file = max(folderpath, key=os.path.getctime)
            # print(latest_file)
            time.sleep(3)
            if filenames in latest_file:
                df1 = pd.read_csv("C:\\inetpub\\wwwroot\\iAssist_IT_support\\New_IT_support_datasets\\" +
                                  filenames, names=["errors", "solutions"])
                df1 = df1.drop(0)
                # print(df1.head())
                df2 = pd.read_csv("C:\\inetpub\\wwwroot\\iAssist_IT_support\\existing_tickets.csv",
                                  names=["errors", "solutions"])
                combined_csv = pd.concat([df2, df1])
                combined_csv.to_csv("C:\\inetpub\\wwwroot\\iAssist_IT_support\\new_tickets-chatdataset.csv",
                                    index=False, encoding='utf-8-sig')
                time.sleep(2)
                # return redirect('/file_upload/status2')
                return jsonify('New data merged with existing datasets')

@app.route('/file_upload/status3', methods=['POST'])
def status2():
    app.logger.debug("file_upload/status3 route is executed")
    if request.method == 'POST':
        # Get data in json format
        if request.get_json():
            message = request.get_json()
            message = message['data']
            app.logger.debug(message)
            return jsonify("New model training is in progress don't upload new file")

@app.route('/file_upload/status4', methods=['POST'])
def model_creation():
    app.logger.debug("file_upload/status4 route is executed")
    if request.method == 'POST':
        # Get data in json format
        if request.get_json():
            message = request.get_json()
            message = message['data']
            app.logger.debug(message)
            app.logger.debug(currentDateTime)

            def model_run():
                app.logger.debug("model script starts to run")
                subprocess.run("python C:\\.....\\IT_support_chatbot-master\\"
                               "Python_files\\main.py", shell=True)
                # time.sleep(20)
                app.logger.debug("script ran successfully")

            async def subprocess_call():
                # run blocking function in another thread,
                # and wait for its result:
                app.logger.debug("sub function execution starts")
                await loop.run_in_executor(_executor, model_run)

            asyncio.set_event_loop(asyncio.SelectorEventLoop())
            loop = asyncio.get_event_loop()
            loop.run_until_complete(subprocess_call())
            loop.close()
            return jsonify("Model created successfully for sent file %s" % filenames)

if __name__ == "__main__":
    app.run()
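A note worth adding here, with a minimal sketch (the script path below is illustrative, not the real one): subprocess.run() already blocks until the child process exits, so the asyncio/threading wrappers do not add any waiting. What usually differs under IIS is the environment: a bare "python" in a shell command may resolve to a different interpreter (or to nothing) for the app-pool identity, and a child that crashes early still lets run() return quickly, which looks like "not waiting". Capturing the exit code and stderr makes that visible:

import subprocess
import sys

def model_run():
    app.logger.debug("model script starts to run")
    # sys.executable reuses the interpreter that is running Flask under IIS
    result = subprocess.run(
        [sys.executable, r"C:\path\to\Python_files\main.py"],  # illustrative path
        capture_output=True, text=True)
    app.logger.debug("model script exited with code %s", result.returncode)
    if result.returncode != 0:
        app.logger.error("model script stderr: %s", result.stderr)
    result.check_returncode()  # raise instead of silently reporting success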

Is there a way to send multiple images to an API at the same time? (fastapi)

I need to hit the FastAPI endpoint with multiple images:
#app.post("/text")
def get_text(files: List[UploadFile] = File(...)):
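For context, a minimal runnable version of that endpoint could look like the sketch below; the handler body is an assumption, since only the signature is shown in the question:

from typing import List
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/text")
def get_text(files: List[UploadFile] = File(...)):
    # Echo the received filenames; the real text-extraction logic is elided
    return {"filenames": [f.filename for f in files]}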
It works when I upload multiple images through the /docs interface. I tried with one file and it works fine; here is the code for that:
import requests
import json

def get_text(image_path):
    # images = {}
    url = 'http://address/text'
    try:
        with open(image_path, "rb") as im:
            image_data = {"files": im}
            response = requests.post(url, files=image_data)
        return json.loads(response.text)
    except Exception as er:
        print("error occurred")
        return "{} error occurred".format(er)
When I tried adding more images to image_data, I got an error.
image_data = {"files": []}
for image in image_list:
    with open(image, "rb") as im:
        image_data['files'].append(im)
I tried the above code, but it didn't work.
Running the above produced an error (the error message was attached as a screenshot and is not reproduced here).
I finally found the solution. It's not a problem with FastAPI; it's related to the requests library: to upload several files under the same field name, requests expects a list of ('files', (filename, fileobj, content_type)) tuples rather than a dict. In case anyone needs the solution, here it is:
files = [
    ('files', ('image1', open('/Users/ai/image1.jpg', 'rb'), 'image/png')),
    ('files', ('image2', open('/Users/ai/image2.jpeg', 'rb'), 'image/png'))
]
You can use the function below for multiple files:
import requests
import json

def get_text(image_list, url):
    try:
        image_data = []
        for image in image_list:
            # ('files', (image_name, opened image, content type))
            image_data.append(('files', (image.split('/')[-1], open(image, 'rb'), 'image/png')))
        response = requests.post(url, files=image_data)
        return json.loads(response.text)
    except Exception as er:
        print("error occurred")
        return "{} error occurred".format(er)
You can check the requests documentation for more details. Thanks!

Python unittest.mock google storage - how to achieve exceptions.NotFound as side effect

I've read a few tutorials on mocking in Python, but I'm still struggling :-/
For example, I have a function wrapping a call to google storage to write a blob.
I'd like to mock the google.storage.Client().bucket(bucket_name) method to return an exceptions.NotFound for a specific non-existent bucket.
I'm using side_effect to set the expected exception.
Do you know what I'm doing wrong?
Below is what I tried (I'm using 2 files: main2.py and main2_test.py):
# main2.py
import logging
from google.cloud import storage

def _write_content(bucket_name, blob_name, content):
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(blob_name)
    try:
        blob.upload_from_string(data=content)
        return True
    except Exception:
        logging.error("Failed to upload blob")
        raise
and
# main2_test.py
import pytest
from unittest.mock import patch
from google.api_core import exceptions
import main2

@patch("main2.storage.Client", autospec=True)
def test_write_content(clientMock):
    bucket_name = "not_existent_bucket"
    clientMock().bucket(bucket_name).side_effect = exceptions.NotFound
    with pytest.raises(exceptions.NotFound):
        main2._write_content(bucket_name, "a_blob_name", '{}')
Example call
pytest main2_test.py::test_write_content
Result
platform linux -- Python 3.7.7, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: /home/user/project, inifile: pytest.ini
plugins: requests-mock-1.8.0
collected 1 item
main2_test.py::test_write_content FAILED [100%]
================================ FAILURES ================================
____________________________ test_write_content ____________________________

clientMock = <MagicMock name='Client' spec='Client' id='139881522497360'>

    @patch("main2.storage.Client", autospec=True)
    def test_write_content(clientMock):
        bucket_name = "my_bucket"
        clientMock().bucket(bucket_name).side_effect = exceptions.NotFound
        with pytest.raises(exceptions.NotFound):
>           main2._write_content(bucket_name, "a_blob_name", '{}')
E           Failed: DID NOT RAISE <class 'google.api_core.exceptions.NotFound'>

main2_test.py:14: Failed
==========================================================================
FAILED main2_test.py::test_write_content - Failed: DID NOT RAISE <class 'google.api_core.exceptions.NotFound'>
==========================================================================
Your test has two problems: you are not mocking the method that should actually raise (upload_from_string), and you are setting an exception class instead of an exception instance as the side effect (NotFound cannot be instantiated without a message, so raising the bare class fails).
The following would work:
#patch("main2.storage.Client", autospec=True)
def test_write_content(clientMock):
blob_mock = clientMock().bucket.return_value.blob.return_value # split this up for readability
blob_mock.upload_from_string.side_effect = exceptions.NotFound('testing') # the exception is created here
with pytest.raises(exceptions.NotFound):
main2._write_content("not_existent", "a_blob_name", '{}')
Note also that setting a specific parameter for the bucket call has no effect, as it is called on a mock, and the argument is just ignored - I replaced it with return_value, which makes this clearer.
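If you also want to verify the interactions, two optional assertions can be appended to the same test; they only check what the mocks recorded:

# inside test_write_content, after the pytest.raises block:
blob_mock.upload_from_string.assert_called_once_with(data='{}')
clientMock().bucket.assert_called_with("not_existent")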

How can I redirect hardcoded calls to open to custom files?

I've written some Python code that needs to read a config file at /etc/myapp/config.conf. I want to write a unit test for what happens if that file isn't there, or contains bad values, the usual stuff. Let's say it looks like this...
""" myapp.py
"""
def readconf()
""" Returns string of values read from file
"""
s = ''
with open('/etc/myapp/config.conf', 'r') as f:
s = f.read()
return s
And then I have other code that parses s for its values.
Can I, through some magic Python functionality, make any calls that readconf makes to open redirect to custom locations that I set as part of my test environment?
Example would be:
main.py
def _open_file(path):
    with open(path, 'r') as f:
        return f.read()

def foo():
    return _open_file("/sys/conf")
test.py
from unittest.mock import patch
from main import foo

def test_when_file_not_found():
    with patch('main._open_file') as mopen_file:
        # Set up the mock to raise the error you want
        mopen_file.side_effect = FileNotFoundError()
        # Run the actual function
        result = foo()
        # Assert the result is as expected
        assert result == "Sorry, missing file"
Instead of hard-coding the config file path, you can externalize it or parameterize it. There are two ways to do it:
Environment variables: Use a $CONFIG environment variable that contains the location of the config file. You can run the test with an environment variable that can be set using os.environ['CONFIG'].
CLI params: Initialize the module with commandline params. For tests, you can set sys.argv and let the config property be set by that.
In order to mock just calls to open in your function, while not replacing the call with a helper function, as in Nf4r's answer, you can use a custom patch context manager:
from contextlib import contextmanager
from types import CodeType

@contextmanager
def patch_call(func, call, replacement):
    fn_code = func.__code__
    try:
        func.__code__ = CodeType(
            fn_code.co_argcount,
            fn_code.co_kwonlyargcount,
            fn_code.co_nlocals,
            fn_code.co_stacksize,
            fn_code.co_flags,
            fn_code.co_code,
            fn_code.co_consts,
            tuple(
                replacement if call == name else name
                for name in fn_code.co_names
            ),
            fn_code.co_varnames,
            fn_code.co_filename,
            fn_code.co_name,
            fn_code.co_firstlineno,
            fn_code.co_lnotab,
            fn_code.co_freevars,
            fn_code.co_cellvars,
        )
        yield
    finally:
        func.__code__ = fn_code
Now you can patch your function:
def patched_open(*args):
    raise FileNotFoundError

# note: the replacement name is looked up in readconf's own module globals,
# so patched_open must exist there (e.g. myapp.patched_open = patched_open)
with patch_call(readconf, "open", "patched_open"):
    ...
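As an aside, the positional CodeType constructor above matches Python 3.7 and earlier; Python 3.8 inserted co_posonlyargcount as the second argument, so the call breaks there. On 3.8+ the replace() method of code objects does the same swap and is version-proof:

# Python 3.8+: override only the names tuple, keep every other code attribute
func.__code__ = fn_code.replace(
    co_names=tuple(
        replacement if call == name else name
        for name in fn_code.co_names
    )
)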
You can use mock to patch a module's instance of the 'open' built-in to redirect to a custom function.
""" myapp.py
"""
def readconf():
s = ''
with open('./config.conf', 'r') as f:
s = f.read()
return s
""" test_myapp.py
"""
import unittest
from unittest import mock
import myapp
def my_open(path, mode):
return open('asdf', mode)
class TestSystem(unittest.TestCase):
#mock.patch('myapp.open', my_open)
def test_config_not_found(self):
try:
result = myapp.readconf()
assert(False)
except FileNotFoundError as e:
assert(True)
if __name__ == '__main__':
unittest.main()
You could also do it with a lambda like this, if you wanted to avoid declaring another function.
@mock.patch('myapp.open', lambda path, mode: open('asdf', mode))
def test_config_not_found(self):
    ...
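One more standard-library option, as a sketch: unittest.mock.mock_open can both feed custom contents to readconf and simulate a missing file, with no helper file like 'asdf' on disk:

from unittest import mock
import myapp

# Feed custom config contents to the code under test
with mock.patch('myapp.open', mock.mock_open(read_data='key=value')):
    assert myapp.readconf() == 'key=value'

# Simulate a missing config file
with mock.patch('myapp.open', mock.mock_open()) as mopen:
    mopen.side_effect = FileNotFoundError
    try:
        myapp.readconf()
        assert False, "expected FileNotFoundError"
    except FileNotFoundError:
        pass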

Unit test case for file upload flask

I have created a Flask application where I upload a file and then predict the type of the file. I want to write unit tests for it, but I am new to unit testing in Python and therefore very confused! There are two parts to my code: the first is the main function, which then calls the classification method.
main.py - here the file is uploaded, and then we call the func_predict function, which returns the output:
# (names such as api, Resource, FileStorage and func_predict come from the
# rest of main.py and are omitted in the question)
upload_parser = api.parser()
upload_parser.add_argument('file', location='files',
                           type=FileStorage, required=True)

@api.route('/classification')
@api.expect(upload_parser)
class classification(Resource):
    def post(self):
        """
        predict the document
        """
        args = upload_parser.parse_args()
        uploaded_file = args['file']
        filename = uploaded_file.filename
        prediction, confidence = func_predict(uploaded_file)
        return {'file_name': filename, 'prediction': prediction, 'confidence': confidence}, 201
predict.py: this file contains the func_predict function, which does the actual prediction work. It takes the uploaded file as input:
def func_predict(file):
    filename = file.filename  # filename
    extension = os.path.splitext(filename)[1][1:].lower()  # file extension
    path = os.path.join(UPLOAD_FOLDER, filename)  # store the temporary path of the file
    output = {}
    try:
        # Does some processing.... some lines which are not relevant, and then returns the two values
        return (''.join(y_pred), max_prob)
    except Exception:
        # (the except clause is not shown in the question; added so the snippet parses)
        raise
Now my confusion is: how do I mock the uploaded file, which is of FileStorage type? And which part should I test: the '/classification' route or func_predict?
I have tried the method below, though without any success. I created a test.py file, imported the classification resource from main.py, and then passed a filename in the data:
from flask import Flask, Request
import io
import unittest

from main import classification

class TestFileFail(unittest.TestCase):
    def test_1(self):
        app = Flask(__name__)
        app.debug = True
        app.request_class = MyRequest  # MyRequest is referenced but not defined in the question
        client = app.test_client()
        resp = client.post(
            '/classification',
            data={
                'file': 'C:\\Users\\aswathi.nambiar\\Desktop\\Desktop docs\\W8_ECI_1.pdf'
            }, content_type='multipart/form-data'
        )
        print(resp.data)
        self.assertEqual(
            'ok',
            resp.data,
        )

if __name__ == '__main__':
    unittest.main()
I am completely lost! I know there have been earlier questions, but I have not been able to figure this out.
I have finally stumbled upon how to test it, in case anybody is looking for something similar.
from predict_main_restplus import func_predict
from werkzeug.datastructures import FileStorage

file = None

def test_classification_correct():
    with open('W8-EXP_1.pdf', 'rb') as fp:
        file = FileStorage(fp)
        a, b = func_predict(file)
        assert (a, b) == ('W-8EXP', 90.15652760121652)
So, here we are testing the prediction function in predict.py; it returns two values, the prediction result and the confidence of the prediction. We can mock the upload by opening a real file and wrapping it in FileStorage. This worked for me.
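If you would rather exercise the /classification route end to end, the usual Flask pattern is to post an (io.BytesIO, filename) tuple through the test client. A sketch, assuming main.py exposes the Flask app object as app:

import io

from main import app  # assumption: main.py exposes the Flask app used by the API

def test_classification_route():
    client = app.test_client()
    data = {'file': (io.BytesIO(b'dummy pdf bytes'), 'W8_ECI_1.pdf')}
    resp = client.post('/classification', data=data,
                       content_type='multipart/form-data')
    assert resp.status_code == 201

With a real model behind the route, the dummy bytes would of course need to be replaced with a readable test document.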
