How to write a unit test for a function that only updates a global dict - Python (pytest)

Unit-test code for a function that does a validation operation and updates the global dict result_count = {'test_method': {'Total_tested': 0, 'passed': 0, 'failed': 0}}.
Below is the function:
def validate_response(testmethod, response, expected_data):
    ra = response.json()
    expected = expected_data['payload']
    if (response.status_code == expected['response_status']) \
            and (result_count[testmethod['folder']]["FAILED"] < 10):
        ...  # code logic: checks using jsondiff and regular expressions
    else:
        mismatch = 'response status code mismatch'
        update_result(testmethod, 'status_code', expected_data, response,
                      mismatch, fail=True)
        result_count[testmethod['folder']]['FAILED'] += 1
I need to write a test for the above function to check whether result_count is updated properly. The regular expression library and jsondiff are also used in the function.
I need help mocking the global variable and using it in the test.
While executing the test script below, I was getting a KeyError for the global variable result_count, which implies the code is unable to access result_count. After the update, the KeyError turned into: TypeError: '<=' not supported between instances of 'MagicMock' and 'int'.
The current issue is that result_count is not updated when the following line is executed:
partner_test.validate_response(test_input_mocker.method, response, expected_data)
My unit test script is below:
#patch("tests.p_test.result")
#patch("tests.p_test.result_count")
def test_validate_response_pass(result_count_mocker, monkeypatch, result_mocker, test_input_mocker):
# Build data for validate response function
response = Resp(200, {'message': 'pong'})
response_data = response.json()
expected_data = {some_test_data}
# global variable import and initialize result, result_count
from p_test import result, result_count
result_count.update(result_count_mocker.data)
result.update(result_mocker.data)
result_count_mocker.return_value = result_count_mocker.data
def update_result_mocker(*args):
mock operations here
return None
monkeypatch.setattr(partner_test, "update_result", update_result_mocker)
p_test.validate_response(test_input_mocker.method, response, expected_data)
Resp() in the test function is a response class created to mock the response object.

The issue can be resolved using @patch.dict(dict_name, values). Below is the solution:

@patch.dict(p_test.result, {dict_values})
@patch.dict(p_test.result_count, {dict_values})
def test_validate_response(monkeypatch, test_input_mocker, result_count_test):
    response = Resp(200, {'message': 'pong'})
    response_data = response.json()
    expected_data = {----your data-----}

    def update_result_mocker(*args):
        ...  # mock operations

    monkeypatch.setattr(p_test, "update_result", update_result_mocker)
    p_test.validate_response(test_input_mocker.method, response, expected_data)
    print(p_test.result_count)
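For reference, here is a minimal, self-contained sketch of the same pattern with invented names (result_count and record_failure stand in for the real module globals): patch.dict swaps in known values for the duration of the test and restores the original contents afterwards, so each test starts from a clean state.

from unittest.mock import patch

# Hypothetical stand-in for the module-level global in p_test.
result_count = {'folder_a': {'Total_tested': 0, 'passed': 0, 'failed': 0}}

def record_failure(folder):
    # Toy version of the code under test: bump the 'failed' counter.
    result_count[folder]['failed'] += 1

def test_record_failure_increments_failed():
    # patch.dict restores the original dict on exit, so no state
    # leaks into other tests.
    with patch.dict(result_count,
                    {'folder_a': {'Total_tested': 0, 'passed': 0, 'failed': 0}}):
        record_failure('folder_a')
        assert result_count['folder_a']['failed'] == 1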

Related

Google Cloud Function (Python) does not deploy - Function failed on loading user code

I'm deploying a simple Python function to Google Cloud Functions but cannot get it to save. It shows this error:
"Function failed on loading user code. This is likely due to a bug in the user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation."
The logs don't seem to show much that would indicate an error in the code. I followed this guide: https://blog.thereportapi.com/automate-a-daily-etl-of-currency-rates-into-bigquery/
The only differences are the environment variables and the endpoint I'm using.
The code is below; it is just a GET request followed by a push of the data into a table.
import requests
import json
import time
import os
from google.cloud import bigquery

# Set default values for these variables if they are not found in environment variables
PROJECT_ID = os.environ.get("PROJECT_ID", "xxxxxxxxxxxxxx")
EXCHANGERATESAPI_KEY = os.environ.get("EXCHANGERATESAPI_KEY", "xxxxxxxxxxxxxxx")
REGIONAL_ENDPOINT = os.environ.get("REGIONAL_ENDPOINT", "europe-west1")
DATASET_ID = os.environ.get("DATASET_ID", "currency_rates")
TABLE_NAME = os.environ.get("TABLE_NAME", "currency_rates")
BASE_CURRENCY = os.environ.get("BASE_CURRENCY", "SEK")
SYMBOLS = os.environ.get("SYMBOLS", "NOK,EUR,USD,GBP")

def hello_world(request):
    latest_response = get_latest_currency_rates()
    write_to_bq(latest_response)
    return "Success"

def get_latest_currency_rates():
    PARAMS = {'access_key': EXCHANGERATESAPI_KEY, 'symbols': SYMBOLS, 'base': BASE_CURRENCY}
    response = requests.get("https://api.exchangeratesapi.io/v1/latest", params=PARAMS)
    print(response.json())
    return response.json()

def write_to_bq(response):
    # Instantiate a client
    bigquery_client = bigquery.Client(project=PROJECT_ID)
    # Prepare references to the dataset and table
    dataset_ref = bigquery_client.dataset(DATASET_ID)
    table_ref = dataset_ref.table(TABLE_NAME)
    table = bigquery_client.get_table(table_ref)
    # Get the current timestamp so we know how fresh the data is
    timestamp = time.time()
    jsondump = json.dumps(response)  # returns a string
    # Ensure the response is a string, not JSON
    rows_to_insert = [{"timestamp": timestamp, "data": jsondump}]
    errors = bigquery_client.insert_rows(table, rows_to_insert)  # API request
    print(errors)
    assert errors == []
I tried just the part that does the GET request in an offline editor and can confirm the response works fine. I suspect it might have something to do with permissions or the way the script tries to access the database.
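As a general debugging sketch (an assumption, not part of the original post): "Function failed on loading user code" usually means an exception was raised while importing the module, so importing it locally often surfaces the real error.

import importlib

try:
    # Cloud Functions loads main.py by default; adjust the name if yours differs.
    main = importlib.import_module("main")
    print("Module imported cleanly; entry point:", main.hello_world)
except Exception as exc:
    # Whatever prints here is the error the deploy message is hiding.
    print(f"Import failed: {type(exc).__name__}: {exc}")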

TypeError: process_session() missing 1 required positional argument: 'session'

I am facing this error. I have tried everything. I deployed this function as a Google Cloud Function, but when I hit the trigger URL I get the error:
Request cannot be handled
The logs are:
TypeError: process_session() missing 1 required positional argument: 'session'
at call_user_function (/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py:261)
at invoke_user_function (/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py:268)
at run_http_function (/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py:402)
Function code
def process_session(self, session, utc_offset=0):
    s = {}
    try:
        edfbyte, analysis = process_session(session, utc_offset)
        report_json, quality = process_analysis(analysis, session.ref.id)
        # save the EDF
        path = 'Users/' + session.ref.get("uid") + '/session-' + session.ref.get("sessionId") + '.edf'
        path_report = 'Users/' + session.ref.get("uid") + '/session-' + session.ref.get("sessionId") + '.json'
        bucket = storage.bucket("......")
        bucket.blob(path).upload_from_string(edfbyte, content_type='application/octet-stream')
        bucket.blob(path_report).upload_from_string(report_json, content_type='application/json')
        # update session
        s = session.to_dict()
        s[u'macid'] = analysis['header']['macid']
        s[u'quality'] = quality
        s[u'edfPath'] = path
        s[u'reportPath'] = path_report
        s[u'timestamp'] = dateutil.parser.parse(analysis['header']['startdate'])
        self.db.collection(u'ProcessedSessions').document(session.ref.id).set(s)
        try:
            self.db.collection(u'UnprocessedSessions').document(session.ref.id).delete()
            # session.ref.reference.delete()
        except:
            pass
        return True, 0
    except Exception as e:
        traceback.print_exc()
        s = session.to_dict()
        if u'attempt' in s:
            attempt = s['attempt']
        else:
            attempt = 0
        self.db.collection(u'UnprocessedSessions').document(session.ref.id).set({u'attempt': attempt + 1}, merge=True)
        return False, attempt + 1
You are trying to pass three parameters via an HTTP trigger, but a Python HTTP function accepts only one parameter, which is a Flask request object.
HTTP Functions
Your function is passed a single parameter, (request), which is a Flask Request object. Return any value from your function that can be handled by the Flask make_response method. The result will be the HTTP response.
You can try this to invoke your function:
def functionx(request):
    # A GCF HTTP function always receives only one parameter:
    # an HTTP request object (a Flask request).
    # You need to get your parameters from the request;
    # after that you can call your method.
    process_session(param1, param2, param3)

def process_session(self, session, utc_offset=0):
    # do something
    print("some code here")
Or, if you need to use more parameters for your entry point, you can use a background function.
Background Functions
Background functions are passed arguments holding data associated with the event that triggered the function's execution. In the Python runtime, your function is passed the arguments (data, context).
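As a concrete illustration of the single-parameter contract (a sketch; the processor instance and the field names are assumptions, not taken from the original post), the values can be pulled out of the request body before delegating:

def entry_point(request):
    # request is a flask.Request; GCF passes nothing else to an HTTP function.
    data = request.get_json(silent=True) or {}
    session = data.get("session")
    utc_offset = int(data.get("utc_offset", 0))
    # 'processor' is a hypothetical instance of the class that owns process_session.
    ok, attempts = processor.process_session(session, utc_offset)
    return f"ok={ok} attempts={attempts}"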
Is this all part of def process_session? I can't see the indentation. Is it inside a class?
Maybe try, in line 4:
edfbyte, analysis = self.process_session(session, utc_offset)
The given error indicates that you haven't referenced the class instance.

How to set mocked exception behavior in Python?

I am using an external library (github3.py) that defines an internal exception (github3.exceptions.UnprocessableEntity). It doesn't matter how this exception is defined internally; I want to create a side effect and set the attributes I use from this exception.
Tested code (a not-so-minimal example):
import github3

class GithubService:
    def __init__(self, token: str) -> None:
        self.connection = github3.login(token=token)
        self.repos = self.connection.repositories()

    def create_pull(self, repo_name: str) -> str:
        for repo in self.repos:
            if repo.full_name == repo_name:
                break
        try:
            created_pr = repo.create_pull(
                title="title",
                body="body",
                head="head",
                base="base",
            )
        except github3.exceptions.UnprocessableEntity as github_exception:
            extra = ""
            for error in github_exception.errors:
                if "message" in error:
                    extra += f"{error['message']} "
                else:
                    extra += f"Invalid field {error['field']}. "  # testing this case
            return f"{repo_name}: {github_exception.msg}. {extra}"
I need to set the attributes msg and errors on the exception. So I tried this in my test code, using pytest-mock:
@pytest.fixture
def mock_github3_login(mocker: MockerFixture) -> MockerFixture:
    """Fixture for mocking github3.login."""
    mock = mocker.patch("github3.login", autospec=True)
    mock.return_value.repositories.return_value = [
        mocker.Mock(full_name="staticdev/nope"),
        mocker.Mock(full_name="staticdev/omg"),
    ]
    return mock

def test_create_pull_invalid_field(
    mocker: MockerFixture, mock_github3_login: MockerFixture,
) -> None:
    exception_mock = mocker.Mock(errors=[{"field": "head"}], msg="Validation Failed")
    mock_github3_login.return_value.repositories.return_value[1].create_pull.side_effect = github3.exceptions.UnprocessableEntity(mocker.Mock())
    mock_github3_login.return_value.repositories.return_value[1].create_pull.return_value = exception_mock
    response = GithubService("faketoken").create_pull("staticdev/omg")
    assert response == "staticdev/omg: Validation Failed. Invalid field head."
The problem with this code is that if you set both side_effect and return_value, Python just ignores return_value.
The problem here is that I don't want to know the implementation of UnprocessableEntity in order to pass the right arguments to its constructor, and I didn't find another way using just side_effect. I also tried using return_value while setting the class of the mock, like this:
def test_create_pull_invalid_field(
    mock_github3_login: MockerFixture,
) -> None:
    exception_mock = Mock(__class__=github3.exceptions.UnprocessableEntity, errors=[{"field": "head"}], msg="Validation Failed")
    mock_github3_login.return_value.repositories.return_value[1].create_pull.return_value = exception_mock
    response = GithubService("faketoken").create_pull("staticdev/omg")
    assert response == "staticdev/omg: Validation Failed. Invalid field head."
This also does not work; the exception is not raised. So I don't know how to overcome this issue given the constraint that I don't want to depend on the implementation of UnprocessableEntity. Any ideas?
Based on your example, you don't really need to mock github3.exceptions.UnprocessableEntity itself, only the incoming resp argument.
So the following test should work:
def test_create_pull_invalid_field(
    mocker: MockerFixture, mock_github3_login: MockerFixture,
) -> None:
    mocked_response = mocker.Mock()
    mocked_response.json.return_value = {
        "message": "Validation Failed", "errors": [{"field": "head"}]
    }
    repo = mock_github3_login.return_value.repositories.return_value[1]
    repo.create_pull.side_effect = github3.exceptions.UnprocessableEntity(mocked_response)
    response = GithubService("faketoken").create_pull("staticdev/omg")
    assert response == "staticdev/omg: Validation Failed. Invalid field head."
EDIT:
If you want github3.exceptions.UnprocessableEntity to be completely abstracted away, mocking the entire class won't be possible, since catching classes that do not inherit from BaseException is not allowed (see the docs). But you can get around that by mocking only the constructor:
def test_create_pull_invalid_field(
    mocker: MockerFixture, mock_github3_login: MockerFixture,
) -> None:
    def _initiate_mocked_exception(self) -> None:
        self.errors = [{"field": "head"}]
        self.msg = "Validation Failed"

    mocker.patch.object(
        github3.exceptions.UnprocessableEntity, "__init__",
        _initiate_mocked_exception
    )
    repo = mock_github3_login.return_value.repositories.return_value[1]
    repo.create_pull.side_effect = github3.exceptions.UnprocessableEntity
    response = GithubService("faketoken").create_pull("staticdev/omg")
    assert response == "staticdev/omg: Validation Failed. Invalid field head."
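For background, here is a minimal, self-contained sketch of the side_effect semantics at play (generic names, not from the post): assigning an exception class or instance to side_effect makes the mock raise it when called, and return_value is then never consulted.

from unittest.mock import Mock

class FakeError(Exception):
    # Stand-in exception carrying the attributes the code under test reads.
    def __init__(self, msg, errors):
        self.msg = msg
        self.errors = errors

m = Mock()
m.side_effect = FakeError("Validation Failed", [{"field": "head"}])
m.return_value = "never returned"  # ignored once side_effect raises

try:
    m()
except FakeError as exc:
    print(exc.msg, exc.errors)  # Validation Failed [{'field': 'head'}]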

Getting Http Response from boto3 table.batch_writer object

There is a list of data in a CSV that I want to put into a DynamoDB table on AWS. See the sample list below.
Mary,F,7065
Anna,F,2604
Emma,F,2003
Elizabeth,F,1939
Minnie,F,1746
Margaret,F,1578
Ida,F,1472
Alice,F,1414
Bertha,F,1320
Sarah,F,1288
Annie,F,1258
Clara,F,1226
Ella,F,1156
Florence,F,1063
Cora,F,1045
Martha,F,1040
Laura,F,1012
Nellie,F,995
Grace,F,982
Carrie,F,949
Maude,F,858
Mabel,F,808
Bessie,F,796
Jennie,F,793
Gertrude,F,787
Julia,F,783
Hattie,F,769
Edith,F,768
Mattie,F,704
Rose,F,700
Catherine,F,688
Lillian,F,672
Ada,F,652
Lillie,F,647
Helen,F,636
Jessie,F,635
Louise,F,635
Ethel,F,633
Lula,F,621
Myrtle,F,615
Eva,F,614
Frances,F,605
Lena,F,603
Lucy,F,590
Edna,F,588
Maggie,F,582
Pearl,F,569
Daisy,F,564
Fannie,F,560
Josephine,F,544
In order to write more than 25 items to a DynamoDB table, the documentation uses a batch_writer object.
resource = boto3.resource('dynamodb')
table = resource.Table('Names')
with table.batch_writer() as batch:
    for item in items:
        batch.put_item(item)
Is there a way to return an HTTP response to indicate successful completion of the batch_write? I know that it is asynchronous. Is there a wait or fetch or something to call?
The documentation for the BatchWriter object instantiated by batch_writer is located (&lt;3 Open Source) here. Looking at the BatchWriter class, the _flush method generates a response; it just doesn't store it anywhere.
class BatchWriter(object):
    """Automatically handle batch writes to DynamoDB for a single table."""
    def __init__(self, table_name, client, flush_amount=25,
                 overwrite_by_pkeys=None):
        """
        :type table_name: str
        :param table_name: The name of the table. The class handles
            batch writes to a single table.

        :type client: ``botocore.client.Client``
        :param client: A botocore client. Note this client
            **must** have the dynamodb customizations applied
            to it for transforming AttributeValues into the
            wire protocol. What this means in practice is that
            you need to use a client that comes from a DynamoDB
            resource if you're going to instantiate this class
            directly, i.e
            ``boto3.resource('dynamodb').Table('foo').meta.client``.

        :type flush_amount: int
        :param flush_amount: The number of items to keep in
            a local buffer before sending a batch_write_item
            request to DynamoDB.

        :type overwrite_by_pkeys: list(string)
        :param overwrite_by_pkeys: De-duplicate request items in buffer
            if match new request item on specified primary keys. i.e
            ``["partition_key1", "sort_key2", "sort_key3"]``
        """
        self._table_name = table_name
        self._client = client
        self._items_buffer = []
        self._flush_amount = flush_amount
        self._overwrite_by_pkeys = overwrite_by_pkeys

    def put_item(self, Item):
        self._add_request_and_process({'PutRequest': {'Item': Item}})

    def delete_item(self, Key):
        self._add_request_and_process({'DeleteRequest': {'Key': Key}})

    def _add_request_and_process(self, request):
        if self._overwrite_by_pkeys:
            self._remove_dup_pkeys_request_if_any(request)
        self._items_buffer.append(request)
        self._flush_if_needed()

    def _remove_dup_pkeys_request_if_any(self, request):
        pkey_values_new = self._extract_pkey_values(request)
        for item in self._items_buffer:
            if self._extract_pkey_values(item) == pkey_values_new:
                self._items_buffer.remove(item)
                logger.debug("With overwrite_by_pkeys enabled, skipping "
                             "request:%s", item)

    def _extract_pkey_values(self, request):
        if request.get('PutRequest'):
            return [request['PutRequest']['Item'][key]
                    for key in self._overwrite_by_pkeys]
        elif request.get('DeleteRequest'):
            return [request['DeleteRequest']['Key'][key]
                    for key in self._overwrite_by_pkeys]
        return None

    def _flush_if_needed(self):
        if len(self._items_buffer) >= self._flush_amount:
            self._flush()

    def _flush(self):
        items_to_send = self._items_buffer[:self._flush_amount]
        self._items_buffer = self._items_buffer[self._flush_amount:]
        response = self._client.batch_write_item(
            RequestItems={self._table_name: items_to_send})
        unprocessed_items = response['UnprocessedItems']
        if unprocessed_items and unprocessed_items[self._table_name]:
            # Any unprocessed_items are immediately added to the
            # next batch we send.
            self._items_buffer.extend(unprocessed_items[self._table_name])
        else:
            self._items_buffer = []
        logger.debug("Batch write sent %s, unprocessed: %s",
                     len(items_to_send), len(self._items_buffer))

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, tb):
        # When we exit, we need to keep flushing whatever's left
        # until there's nothing left in our items buffer.
        while self._items_buffer:
            self._flush()
How I solved it:
I built on the responses to this question about overwriting class methods. They all work, but the best fit for my use case was to overwrite the instance's _flush method with the version below.
First I built a new version of _flush:
import logging
import types

## New Flush
def _flush(self):
    items_to_send = self._items_buffer[:self._flush_amount]
    self._items_buffer = self._items_buffer[self._flush_amount:]
    self._response = self._client.batch_write_item(
        RequestItems={self._table_name: items_to_send})
    unprocessed_items = self._response['UnprocessedItems']
    if unprocessed_items and unprocessed_items[self._table_name]:
        # Any unprocessed_items are immediately added to the
        # next batch we send.
        self._items_buffer.extend(unprocessed_items[self._table_name])
    else:
        self._items_buffer = []
    logger.debug("Batch write sent %s, unprocessed: %s",
                 len(items_to_send), len(self._items_buffer))
Then I overwrote the instance method like this (note that batch_writer is a method on the table object):

with table.batch_writer() as batch:
    batch._flush = types.MethodType(_flush, batch)
    for item in items:
        batch.put_item(Item=item)
print(batch._response)
And this generates an output like this.
{'UnprocessedItems': {},
 'ResponseMetadata': {'RequestId': '853HSV0ULO4BN71R6T895J991VVV4KQNSO5AEMVJF66Q9ASUAAJ',
  'HTTPStatusCode': 200,
  'HTTPHeaders': {'server': 'Server',
   'date': 'Fri, 29 Mar 2019 18:29:49 GMT',
   'content-type': 'application/x-amz-json-1.0',
   'content-length': '23',
   'connection': 'keep-alive',
   'x-amzn-requestid': '853HSV0ULO4BN71R6T895J991VVV4KQNSO5AEMVJF66Q9ASUAAJ',
   'x-amz-crc32': '4185382645'},
  'RetryAttempts': 0}}
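An alternative to patching the instance (a sketch built on the BatchWriter internals shown above, not on any documented API) is to subclass it and collect every response:

from boto3.dynamodb.table import BatchWriter

class ResponseTrackingBatchWriter(BatchWriter):
    # BatchWriter subclass that keeps every batch_write_item response.
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.responses = []

    def _flush(self):
        items_to_send = self._items_buffer[:self._flush_amount]
        self._items_buffer = self._items_buffer[self._flush_amount:]
        response = self._client.batch_write_item(
            RequestItems={self._table_name: items_to_send})
        self.responses.append(response)
        unprocessed = response['UnprocessedItems']
        if unprocessed and unprocessed.get(self._table_name):
            self._items_buffer.extend(unprocessed[self._table_name])
        else:
            self._items_buffer = []

with ResponseTrackingBatchWriter('Names', table.meta.client) as batch:
    for item in items:
        batch.put_item(Item=item)
print(batch.responses)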
There doesn't appear to be any built-in way to do this. The _flush method on BatchWriter does log a debug message when it finishes a batch, though. If you just want to see what's happening, you could enable debug logging before your put_item loop:
import logging
logger = logging.getLogger('boto3.dynamodb.table')
logger.setLevel(logging.DEBUG)
If you want to take some action instead you could create a custom logging.Handler, something like this:
import logging
import sys

class CatchBatchWrites(logging.Handler):
    def handle(self, record):
        if record.msg.startswith('Batch write sent'):
            processed, unprocessed = record.args
            # do something with these numbers

logger = logging.getLogger('boto3.dynamodb.table')
logger.setLevel(logging.DEBUG)  # still necessary
logger.addHandler(CatchBatchWrites())
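For completeness, here is a sketch of how the CSV sample above could be fed into the batch_writer loop (the file name and column names are assumptions, since the sample has no header row):

import csv
import boto3

resource = boto3.resource('dynamodb')
table = resource.Table('Names')

with open('names.csv', newline='') as f:
    # The sample has no header row, so these field names are guesses.
    reader = csv.DictReader(f, fieldnames=['name', 'sex', 'count'])
    items = [
        {'name': row['name'], 'sex': row['sex'], 'count': int(row['count'])}
        for row in reader
    ]

with table.batch_writer() as batch:
    for item in items:
        batch.put_item(Item=item)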

Lambda/boto3/python loop

This code acts as an early warning system for ADFS failures, and it works fine when run locally. The problem is that when I run it in Lambda, it loops non-stop.
In short:
lambda_handler() runs pagecheck()
pagecheck() produces the info needed then passes 2 lists (msgdet_list, error_list) and an int (error_count) to notification().
notification() collates and prints the output. The output is two key variables (notificationheader and notificationbody).
I've commented out the SNS piece which would usually email the info, and am using print() to send the info to CloudWatch logs instead, until I can get the loop sorted. Logs:
CloudWatch logs
If I run this locally, it produces a clean single output. In Lambda, the function loops until it times out. It's almost as if every time the lists are updated, they're passed to the notification() module and it runs. I can limit the function time, but would rather fix the code!
Cheers,
tac
# This python/boto3/lambda script sends a request to an Office 365 landing page, parses the returned details to
# confirm a successful redirect to the organisation's ADFS home page, authenticates that the home page is correct,
# raises any errors, and sends a consolidated report to an AWS SNS topic.
# Run once to produce the pageserver and htmlchar values for the global variables.
# Import required modules
import boto3
import urllib.request
from urllib.request import Request, urlopen
from datetime import datetime
import time
import re
import sys

# Global variables to be set
url = "https://outlook.com/CONTOSSO.com"
adfslink = "https://sts.CONTOSSO.com/adfs/ls/?client-request-id="

# Input after first run
pageserver = "Microsoft-HTTPAPI/2.0 Microsoft-HTTPAPI/2.0"
htmlchar = 18600

# Input AWS SNS ARN
snsarn = 'arn:aws:sns:ap-southeast-2:XXXXXXXXXXXXX:Daily_Check_Notifications_CONTOSSO'
sns = boto3.client('sns')
def pagecheck():
    # Present the request to the webpage as if coming from a user in a browser
    user_agent = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64)'
    values = {'name': 'user'}
    headers = {'User-Agent': user_agent}
    data = urllib.parse.urlencode(values)
    data = data.encode('ascii')

    # "Null" the Message Detail and Error lists
    msgdet_list = []
    error_list = []

    request = Request(url)
    req = urllib.request.Request(url, data, headers)
    response = urlopen(request)
    with urllib.request.urlopen(request) as response:
        # Get the URL. This gets the real URL.
        acturl = response.geturl()
        msgdet_list.append("\nThe Actual URL is:")
        msgdet_list.append(str(acturl))
        if adfslink not in acturl:
            error_list.append(str("Redirect Fail"))

        # Get the HTTP response code
        httpcode = response.code
        msgdet_list.append("\nThe HTTP code is: ")
        msgdet_list.append(str(httpcode))
        if httpcode // 200 != 1:
            error_list.append(str("No HTTP 2XX Code"))

        # Get the Headers as a dictionary-like object
        headers = response.info()
        msgdet_list.append("\nThe Headers are:")
        msgdet_list.append(str(headers))
        if response.info() == "":
            error_list.append(str("Header Error"))

        # Get the date of the request and compare to UTC (DD MMM YYYY HH MM)
        date = response.info()['date']
        msgdet_list.append("The Date is: ")
        msgdet_list.append(str(date))
        returndate = str(date.split()[1:5])
        returndate = re.sub(r'[^\w\s]', '', returndate)
        returndate = returndate[:-2]
        currentdate = datetime.utcnow()
        currentdate = currentdate.strftime("%d %b %Y %H%M")
        if returndate != currentdate:
            date_error = ("Date Error. Returned Date: ", returndate, "Expected Date: ", currentdate, "Times in UTC (DD MMM YYYY HH MM)")
            date_error = str(date_error)
            date_error = re.sub(r'[^\w\s]', '', date_error)
            error_list.append(str(date_error))

        # Get the server
        headerserver = response.info()['server']
        msgdet_list.append("\nThe Server is: ")
        msgdet_list.append(str(headerserver))
        if pageserver not in headerserver:
            error_list.append(str("Server Error"))

        # Get all HTML data and confirm no major change to content size by character length (global var: htmlchar).
        html = response.read()
        htmllength = len(html)
        msgdet_list.append("\nHTML Length is: ")
        msgdet_list.append(str(htmllength))
        msgdet_list.append("\nThe Full HTML is: ")
        msgdet_list.append(str(html))
        msgdet_list.append("\n")
        if htmllength // htmlchar != 1:
            error_list.append(str("Page HTML Error - incorrect # of characters"))
        if adfslink not in str(acturl):
            error_list.append(str("ADFS Link Error"))

        error_list.append("\n")
        error_count = len(error_list)
        if error_count == 1:
            error_list.insert(0, 'No Errors Found.')
        elif error_count == 2:
            error_list.insert(0, 'Error Found:')
        else:
            error_list.insert(0, 'Multiple Errors Found:')

        # Pass completed results and data to the notification() module
        notification(msgdet_list, error_list, error_count)
# Use AWS SNS to create a notification email with the additional data generated
def notification(msgdet_list, error_list, errors):
    datacheck = str("\n".join(msgdet_list))
    errorcheck = str("\n".join(error_list))
    notificationbody = str(errorcheck + datacheck)
    if errors > 1:
        result = 'FAILED!'
    else:
        result = 'passed.'
    notificationheader = ('The daily ADFS check has been marked as ' + result + ' ' + str(errors) + ' ' + str(error_list))
    if result != 'passed.':
        # message = sns.publish(
        #     TopicArn = snsarn,
        #     Subject = notificationheader,
        #     Message = notificationbody
        # )
        # Output result to CloudWatch logstream
        print('Response: ' + notificationheader)
    else:
        print('passed')
    sys.exit()
# Trigger the Lambda handler
def lambda_handler(event, context):
    aws_account_ids = [context.invoked_function_arn.split(":")[4]]
    pagecheck()
    return "Successful"

sys.exit()
Your CloudWatch logs contain the following error message:
Process exited before completing request
This is caused by invoking sys.exit() in your code. Locally, your Python interpreter will simply terminate when encountering such a sys.exit().
AWS Lambda, on the other hand, expects a Python function to just return and treats sys.exit() as an error. As your function probably got invoked asynchronously, AWS Lambda retries executing it twice.
To solve your problem, replace the occurrences of sys.exit() with return, or better still, just remove the sys.exit() calls, as there are already implicit returns in the places where you use sys.exit().
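As a minimal illustration of the fix (generic names, not the original script), helper functions should simply fall off the end and the handler should finish with a plain return:

def notify(result):
    # Print (or publish to SNS) and simply return; calling sys.exit()
    # here is what makes Lambda report "Process exited before completing request".
    print('Response: ' + result)

def lambda_handler(event, context):
    notify('passed')
    return "Successful"  # a normal return, not sys.exit()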
