I have a node.js application served over https. I would like to call an API from that application. The API is also served over https and it has been generated using the express-generator.
Unfortunately the call never works. There is no error message. The call never reaches the API.
Strangely enough, if I try to call another public API (e.g. https://api.publicapis.org/entries) that works perfectly.
Here is my call:
const requestBody = {
  'querystring': searchQuery,
};

const options = {
  rejectUnauthorized: false,
  keepAlive: false, // switch to true if you're making a lot of calls from this client
};

return new Promise(function (resolve, reject) {
  const sslConfiguredAgent = new https.Agent(options);

  const requestOptions = {
    method: 'POST',
    body: JSON.stringify(requestBody),
    agent: sslConfiguredAgent,
    redirect: 'follow',
  };

  fetch('https://192.168.112.34:3003/search', requestOptions)
    .then(response => response.text())
    .then(result => resolve(result))
    .catch(error => console.log('error', error));
});
};
And here is the API which I would like to call:
router.post('/', cors(), async function(req, res, next) {
  const queryString = req.body.querystring;

  const data = JSON.stringify({
    "query": {
      "match": {
        "phonetic": {
          "query": queryString,
          "fuzziness": "AUTO",
          "operator": "and"
        }
      }
    }
  });

  const { body } = await client.search({
    index: 'phoneticindex',
    body: data
  });

  res.send(body.hits.hits);
});
What is wrong with my API and/or the way I am trying to communicate with it?
UPDATE: I receive the following error in the fetch catch block: 'TypeError: Failed to fetch'
When I create a request in Postman I receive the expected response.
UPDATE 2: This is most probably an SSL-related issue. The webapp is expecting an API with a valid certificate. Obviously my API can only have a self-signed cert, which is not enough here. How can I generate a valid cert for an API which is running on the local network and not publicly available?
UPDATE 3: I managed to make it work by changing the fetch parameters like this:
fetch(url, {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  mode: 'cors',
  body: raw,
  agent: httpsAgent,
  redirect: 'follow',
})
and on the API side I added the following headers:
'Content-Type': 'application/json',
'Access-Control-Allow-Origin' : 'https://localhost:2200',
'Access-Control-Allow-Methods' : 'POST',
'Access-Control-Allow-Headers' : 'Content-Type, Authorization'
I also added app.use(cors()) and regenerated the self-signed certificates.
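For reference, a minimal sketch of what that CORS setup can look like on the Express side (this assumes the cors package and an express.json() body parser; the origin, methods, and headers simply mirror the values listed above):

// Sketch only: the allowed origin matches the webapp URL used above.
const express = require('express');
const cors = require('cors');

const app = express();
app.use(express.json());
app.use(cors({
  origin: 'https://localhost:2200',              // the webapp making the fetch call
  methods: ['POST'],
  allowedHeaders: ['Content-Type', 'Authorization'],
}));
app.use('/search', require('./routes/search'));  // hypothetical path to the router shown above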
I'm trying to return errors from my Lambda functions, but for all of the errors it just returns status 502 with the message Internal server error. Previously it was just returning a CORS error for all types of returned errors. After adding 'Access-Control-Allow-Origin' : '*' to the API Gateway responses, I'm getting the 502 error. I've logged the thrown errors in the catch block and I can see the specific errors in CloudWatch. I've seen this question, but it didn't help. Please note that instead of using a callback I'm using async/await. I've also tried with and without lambda-proxy integration, but the response is the same. Do I need to configure something else in the case of lambda-proxy?
const test = async (event, context) => {
  try {
    context.callbackWaitsForEmptyEventLoop = false;
    const error = { status: 400, message: 'my custom error' };
    throw error;
  } catch(error) {
    console.log('==== error ====', error);
    return createErrorResponse(error.status || 500, error.errors || error.message);
  }
};
createErrorResponse
const createErrorResponse = (statusCode, message, stack={}) => ({
  statusCode,
  headers: {
    'Access-Control-Allow-Origin' : '*',
    'Access-Control-Allow-Credentials': true,
  },
  body: JSON.stringify({
    error: message
  }),
  stack: JSON.stringify({ stack })
});

export default createErrorResponse;
serverless.yml
test:
  handler: api/index.test
  timeout: 360
  events:
    - http:
        path: test-lambda-api
        method: post
        cors: true
When using Lambda proxy integration with API Gateway, the response from Lambda is expected in a certain format:
var response = {
  "statusCode": 200,
  "headers": {
    "my_header": "my_value"
  },
  "body": JSON.stringify(responseBody),
  "isBase64Encoded": false
};
Include your stack JSON key inside the body itself. You can also refer to this post for more info: https://aws.amazon.com/premiumsupport/knowledge-center/malformed-502-api-gateway/
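For example, a sketch of the createErrorResponse helper from the question with the stack moved inside the body, so the top level only contains keys API Gateway understands:

// Sketch: only statusCode/headers/body (and optionally isBase64Encoded) at the top level.
const createErrorResponse = (statusCode, message, stack = {}) => ({
  statusCode,
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': true,
  },
  body: JSON.stringify({
    error: message,
    stack,               // moved into the stringified body
  }),
});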
Also, you should enable debug logs for the API Gateway stage, which will give a better understanding of what is sent to and received from the API GW integration.
I want to make a request to the Coinmarketcap API. They don't allow browser-side (CORS) requests for security reasons; the suggested option for using it from a browser is to route the calls through your own backend service.
People have told me to use one of the serverless options with a free tier, such as AWS Lambda, Google Cloud Functions, or Azure Functions.
But I don't know how to do that. I'm leaning toward AWS Lambda; what do I need to do?
That is my node.js code:
const rp = require('request-promise');
const requestOptions = {
  method: 'GET',
  uri: 'https://pro-api.coinmarketcap.com/v1/cryptocurrency/listings/latest',
  qs: {
    'start': '1',
    'limit': '5000',
    'convert': 'USD'
  },
  headers: {
    'X-CMC_PRO_API_KEY': 'b54bcf4d-1bca-4e8e-9a24-22ff2c3d462c' // that isn't my real api key
  },
  json: true,
  gzip: true
};

rp(requestOptions).then(response => {
  console.log('API call response:', response);
}).catch((err) => {
  console.log('API call error:', err.message);
});
An AWS Lambda function can execute the API call and return the response. You will probably also need AWS API Gateway to expose the Lambda function's response to your app. The Lambda code will look something like this:
const rp = require('request-promise');

exports.handler = async (event) => {
  const requestOptions = {
    method: 'GET',
    uri: 'https://pro-api.coinmarketcap.com/v1/cryptocurrency/listings/latest',
    qs: {
      'start': '1',
      'limit': '5000',
      'convert': 'USD'
    },
    headers: {
      'X-CMC_PRO_API_KEY': 'b54bcf4d-1bca-4e8e-9a24-22ff2c3d462c' // that isn't my real api key
    },
    json: true,
    gzip: true
  };
  const response = await rp(requestOptions); // await the promise so the log shows the data
  console.log(response);
  return response;
}
Basically, the input is passed in the event object, which you can then use like a normal object. Also see https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-as-simple-proxy-for-lambda.html
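If the browser is going to call the API Gateway endpoint directly, the handler should also return its result in the Lambda proxy format and include a CORS header so the browser can read the response. A sketch (the environment variable holding the API key is an assumption, not part of the original code):

const rp = require('request-promise');

exports.handler = async (event) => {
  const requestOptions = {
    method: 'GET',
    uri: 'https://pro-api.coinmarketcap.com/v1/cryptocurrency/listings/latest',
    qs: { start: '1', limit: '5000', convert: 'USD' },
    headers: { 'X-CMC_PRO_API_KEY': process.env.CMC_API_KEY }, // assumed: key stored in an env var
    json: true,
    gzip: true,
  };
  try {
    const data = await rp(requestOptions);
    return {
      statusCode: 200,
      headers: { 'Access-Control-Allow-Origin': '*' }, // lets the browser read the response
      body: JSON.stringify(data),
    };
  } catch (err) {
    return {
      statusCode: 502,
      headers: { 'Access-Control-Allow-Origin': '*' },
      body: JSON.stringify({ error: err.message }),
    };
  }
};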
What I'm Trying to Do
Upload a PDF file from a browser client without exposing any credentials or anything unsavory. Based on this, I thought it could be done, but it doesn't seem to work for me.
The premise is:
you request a pre-signed URL from an S3 Bucket based on a set of parameters supplied to a function that is part of the JavaScript AWS SDK
you supply this URL to the frontend, which can use it to place a file in the S3 Bucket without needing to use any credentials or authentication on the frontend.
GET a Pre-Signed URL From S3
This part is simple and it works for me. I just request a URL from S3 with this little JS nugget:
const s3Params = {
  Bucket: uploadBucket,
  Key: `${fileId}.pdf`,
  ContentType: 'application/pdf',
  Expires: 60,
  ACL: 'public-read',
}

let uploadUrl = s3.getSignedUrl('putObject', s3Params);
Use the Pre-Signed URL to Upload a File to S3
This is the part that doesn't work, and I can't figure out why.
This little chunk of code basically sends a blob of data to the S3 bucket pre-signed URL using a PUT request.
const result = await fetch(response.data.uploadURL, {
  method: 'put',
  body: blobData,
});
PUT or POST?
I've found that using any POST requests results in 400 Bad Request, so PUT it is.
What I've Looked At
Content-Type (in my case, it'd be application/pdf, so blobData.type) -- they match between the backend and frontend.
x-amz-acl header
More Content-Type
Similar use case. Looking at this one, it appears that no headers need to be supplied in the PUT request and the signed URL itself is all that is necessary for the file upload.
Something weird that I don't understand. It looks like I may need to pass the length and type of the file to the getSignedUrl call to S3.
Exposing my Bucket to the public (no bueno)
Upload file to s3 with POST
Frontend (fileUploader.js, using Vue):
...
uploadFile: async function(e) {
  /* receives file from simple input element -> this.file */
  // get signed URL
  const response = await axios({
    method: 'get',
    url: API_GATEWAY_URL
  });
  console.log('upload file response:', response);

  let binary = atob(this.file.split(',')[1]);
  let array = [];
  for (let i = 0; i < binary.length; i++) {
    array.push(binary.charCodeAt(i));
  }
  let blobData = new Blob([new Uint8Array(array)], {type: 'application/pdf'});

  console.log('uploading to:', response.data.uploadURL);
  console.log('blob type sanity check:', blobData.type);

  const result = await fetch(response.data.uploadURL, {
    method: 'put',
    headers: {
      'Access-Control-Allow-Methods': '*',
      'Access-Control-Allow-Origin': '*',
      'x-amz-acl': 'public-read',
      'Content-Type': blobData.type
    },
    body: blobData,
  });

  console.log('PUT result:', result);
  this.uploadUrl = response.data.uploadURL.split('?')[0];
}
Backend (fileReceiver.js):
'use strict';

const uuidv4 = require('uuid/v4');
const aws = require('aws-sdk');
const s3 = new aws.S3();

const uploadBucket = 'the-chumiest-bucket';
const fileKeyPrefix = 'path/to/where/the/file/should/live/';

const getUploadUrl = async () => {
  const fileId = uuidv4();
  const s3Params = {
    Bucket: uploadBucket,
    Key: `${fileId}.pdf`,
    ContentType: 'application/pdf',
    Expires: 60,
    ACL: 'public-read',
  }
  return new Promise((resolve, reject) => {
    let uploadUrl = s3.getSignedUrl('putObject', s3Params);
    resolve({
      'statusCode': 200,
      'isBase64Encoded': false,
      'headers': {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Headers': '*',
        'Access-Control-Allow-Credentials': true,
      },
      'body': JSON.stringify({
        'uploadURL': uploadUrl,
        'filename': `${fileId}.pdf`
      })
    });
  });
};

exports.handler = async (event, context) => {
  console.log('event:', event);
  const result = await getUploadUrl();
  console.log('result:', result);
  return result;
}
Serverless config (serverless.yml):
service: ocr-space-service

provider:
  name: aws
  region: ca-central-1
  stage: ${opt:stage, 'dev'}
  timeout: 20

plugins:
  - serverless-plugin-existing-s3
  - serverless-step-functions
  - serverless-pseudo-parameters
  - serverless-plugin-include-dependencies

layers:
  spaceOcrLayer:
    package:
      artifact: spaceOcrLayer.zip
    allowedAccounts:
      - "*"

functions:
  fileReceiver:
    handler: src/node/fileReceiver.handler
    events:
      - http:
          path: /doc-parser/get-url
          method: get
          cors: true
  startStateMachine:
    handler: src/start_state_machine.lambda_handler
    role:
    runtime: python3.7
    layers:
      - {Ref: SpaceOcrLayerLambdaLayer}
    events:
      - existingS3:
          bucket: ingenio-documents
          events:
            - s3:ObjectCreated:*
          rules:
            - prefix:
            - suffix: .pdf
  startOcrSpaceProcess:
    handler: src/start_ocr_space.lambda_handler
    role:
    runtime: python3.7
    layers:
      - {Ref: SpaceOcrLayerLambdaLayer}
  parseOcrSpaceOutput:
    handler: src/parse_ocr_space_output.lambda_handler
    role:
    runtime: python3.7
    layers:
      - {Ref: SpaceOcrLayerLambdaLayer}
  renamePdf:
    handler: src/rename_pdf.lambda_handler
    role:
    runtime: python3.7
    layers:
      - {Ref: SpaceOcrLayerLambdaLayer}
  parseCorpSearchOutput:
    handler: src/node/pdfParser.handler
    role:
    runtime: nodejs10.x
  saveFileToProcessed:
    handler: src/node/saveFileToProcessed.handler
    role:
    runtime: nodejs10.x

stepFunctions:
  stateMachines:
    ocrSpaceStepFunc:
      name: ocrSpaceStepFunc
      definition:
        StartAt: StartOcrSpaceProcess
        States:
          StartOcrSpaceProcess:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-startOcrSpaceProcess"
            Next: IsDocCorpSearchChoice
            Catch:
              - ErrorEquals: ["HandledError"]
                Next: HandledErrorFallback
          IsDocCorpSearchChoice:
            Type: Choice
            Choices:
              - Variable: $.docIsCorpSearch
                NumericEquals: 1
                Next: ParseCorpSearchOutput
              - Variable: $.docIsCorpSearch
                NumericEquals: 0
                Next: ParseOcrSpaceOutput
          ParseCorpSearchOutput:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-parseCorpSearchOutput"
            Next: SaveFileToProcessed
            Catch:
              - ErrorEquals: ["SqsMessageError"]
                Next: CorpSearchSqsErrorFallback
              - ErrorEquals: ["DownloadFileError"]
                Next: CorpSearchDownloadFileErrorFallback
              - ErrorEquals: ["HandledError"]
                Next: HandledNodeErrorFallback
          SaveFileToProcessed:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-saveFileToProcessed"
            End: true
          ParseOcrSpaceOutput:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-parseOcrSpaceOutput"
            Next: RenamePdf
            Catch:
              - ErrorEquals: ["HandledError"]
                Next: HandledErrorFallback
          RenamePdf:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-renamePdf"
            End: true
            Catch:
              - ErrorEquals: ["HandledError"]
                Next: HandledErrorFallback
              - ErrorEquals: ["AccessDeniedException"]
                Next: AccessDeniedFallback
          AccessDeniedFallback:
            Type: Fail
            Cause: "Access was denied for copying an S3 object"
          HandledErrorFallback:
            Type: Fail
            Cause: "HandledError occurred"
          CorpSearchSqsErrorFallback:
            Type: Fail
            Cause: "SQS Message send action resulted in error"
          CorpSearchDownloadFileErrorFallback:
            Type: Fail
            Cause: "Downloading file from S3 resulted in error"
          HandledNodeErrorFallback:
            Type: Fail
            Cause: "HandledError occurred"
Error:
403 Forbidden
PUT Response
Response {type: "cors", url: "https://{bucket-name}.s3.{region-id}.amazonaw…nedHeaders=host%3Bx-amz-acl&x-amz-acl=public-read", redirected: false, status: 403, ok: false, …}
body: (...)
bodyUsed: false
headers: Headers {}
ok: false
redirected: false
status: 403
statusText: "Forbidden"
type: "cors"
url: "https://{bucket-name}.s3.{region-id}.amazonaws.com/actionID.pdf?Content-Type=application%2Fpdf&X-Amz-Algorithm=SHA256&X-Amz-Credential=CREDZ-&X-Amz-Date=20190621T192558Z&X-Amz-Expires=900&X-Amz-Security-Token={token}&X-Amz-SignedHeaders=host%3Bx-amz-acl&x-amz-acl=public-read"
__proto__: Response
What I'm Thinking
I'm thinking the parameters supplied to the getSignedUrl call using the AWS S3 SDK aren't correct, though they follow the structure suggested by AWS' docs (explained here). Aside from that, I'm really lost as to why my request is rejected. I've even tried exposing my Bucket to the public fully and it still didn't work.
Edit
#1:
After reading this, I tried to structure my PUT request like this:
let authFromGet = response.config.headers.Authorization;

const putHeaders = {
  'Authorization': authFromGet,
  'Content-Type': blobData,
  'Expect': '100-continue',
};

...

const result = await fetch(response.data.uploadURL, {
  method: 'put',
  headers: putHeaders,
  body: blobData,
});
this resulted in a 400 Bad Request instead of a 403; different, but still wrong. It's apparent that putting any headers on the request is wrong.
Digging into this, it's because you are trying to upload an object with a public ACL into a bucket that doesn't allow public objects.
Either remove the public ACL statement, or ensure the bucket is set to be publicly viewable and that no other policy blocks public access (e.g. do you have an account policy preventing publicly viewable objects while attempting to upload an object with the public ACL?).
Basically, you cannot upload objects with a public ACL into a bucket where there is some restriction preventing that - you'll get the 403 error you describe. HTH.
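As a sketch of the first option (reusing uploadBucket, fileId, s3, and blobData from the question), the ACL comes out of the signing parameters and the x-amz-acl header comes off the PUT:

// Backend: sign the URL without an ACL so the object inherits the bucket's default access.
const s3Params = {
  Bucket: uploadBucket,
  Key: `${fileId}.pdf`,
  ContentType: 'application/pdf',
  Expires: 60,
  // ACL: 'public-read' removed
};
const uploadUrl = s3.getSignedUrl('putObject', s3Params);

// Frontend: send only the Content-Type the URL was signed with.
const result = await fetch(uploadUrl, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/pdf' },
  body: blobData,
});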
I am trying to set up a hello world example with AWS Lambda, serving it through API Gateway. I clicked "Create a Lambda Function", which set up the API Gateway, and selected the Blank Function option. I added the Lambda function found in the AWS API Gateway getting started guide:
exports.handler = function(event, context, callback) {
  callback(null, {"Hello":"World"}); // SUCCESS with message
};
The issue is that when I make a GET request to it, it returns a 502 response: { "message": "Internal server error" }. And the logs say "Execution failed due to configuration error: Malformed Lambda proxy response".
Usually, when you see Malformed Lambda proxy response, it means your response from your Lambda function doesn't match the format API Gateway is expecting, like this
{
  "isBase64Encoded": true|false,
  "statusCode": httpStatusCode,
  "headers": { "headerName": "headerValue", ... },
  "body": "..."
}
If you are not using Lambda proxy integration, you can login to API Gateway console and uncheck the Lambda proxy integration checkbox.
Also, if you are seeing intermittent Malformed Lambda proxy response, it might mean the request to your Lambda function has been throttled by Lambda, and you need to request a concurrent execution limit increase on the Lambda function.
If Lambda is used as a proxy, then the response format should be:
{
  "isBase64Encoded": true|false,
  "statusCode": httpStatusCode,
  "headers": { "headerName": "headerValue", ... },
  "body": "..."
}
Note: the body should be stringified.
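For example, a minimal sketch:

exports.handler = async (event) => {
  return {
    isBase64Encoded: false,
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: 'ok' }), // a string, not an object
  };
};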
I think this is because you're not actually returning a proper HTTP response there, which is why you're getting the error.
Personally, I use a set of helper functions like so:
module.exports = {
  success: (result) => {
    return {
      statusCode: 200,
      headers: {
        "Access-Control-Allow-Origin" : "*", // Required for CORS support to work
        "Access-Control-Allow-Credentials" : true // Required for cookies, authorization headers with HTTPS
      },
      body: JSON.stringify(result),
    }
  },
  internalServerError: (msg) => {
    return {
      statusCode: 500,
      headers: {
        "Access-Control-Allow-Origin" : "*", // Required for CORS support to work
        "Access-Control-Allow-Credentials" : true // Required for cookies, authorization headers with HTTPS
      },
      body: JSON.stringify({
        statusCode: 500,
        error: 'Internal Server Error',
        internalError: JSON.stringify(msg),
      }),
    }
  }
} // add more responses here.
Then you simply do:
var responder = require('responder')
// some code
callback(null, responder.success({ message: 'hello world'}))
For Python3:
import json

def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'headers': {
            'Content-Type': 'application/json',
            'Access-Control-Allow-Origin': '*'
        },
        'body': json.dumps({
            'success': True
        }),
        "isBase64Encoded": False
    }
Note that the body isn't required to be set; it can just be empty:
'body': ''
I had this issue; it originated from handler code that was invalid, even though it looks completely fine:
exports.handler = (event, context) => {
  return {
    isBase64Encoded: false,
    body: JSON.stringify({ foo: "bar" }),
    headers: {
      'Access-Control-Allow-Origin': '*',
    },
    statusCode: 200,
  };
}
I got the hint from examining the somewhat confusing API Gateway response logs:
> Endpoint response body before transformations: null
The way to fix it would be to either:
Add the async keyword (an async function implicitly returns a Promise):
exports.handler = async (event, context) => {
  return {
    isBase64Encoded: false,
    body: JSON.stringify({ foo: "bar" }),
    headers: {
      'Access-Control-Allow-Origin': '*',
    },
    statusCode: 200,
  };
}
Return a Promise:
exports.handler = (event, context) => {
  return new Promise((resolve) => resolve({
    isBase64Encoded: false,
    body: JSON.stringify({ foo: "bar" }),
    headers: {
      'Access-Control-Allow-Origin': '*',
    },
    statusCode: 200,
  }));
}
Use the callback:
exports.handler = (event, context, callback) => {
  callback(null, {
    isBase64Encoded: false,
    body: JSON.stringify({ foo: "bar" }),
    headers: {
      'Access-Control-Allow-Origin': '*',
    },
    statusCode: 200,
  });
}
My handler was previously declared async without ever using await, so I removed the async keyword to reduce the complexity of the code, without realizing that Lambda expects you to either use async/await/Promises or the callback to return a result.
From the AWS docs
In a Lambda function in Node.js, to return a successful response, call callback(null, {"statusCode": 200, "body": "results"}). To throw an exception, call callback(new Error('internal server error')). For a client-side error, e.g. a required parameter is missing, you can call callback(null, {"statusCode": 400, "body": "Missing parameters of ..."}) to return the error without throwing an exception.
Just a piece of code for .NET Core and C#:
using Amazon.Lambda.APIGatewayEvents;

...

var response = new APIGatewayProxyResponse
{
    StatusCode = (int)HttpStatusCode.OK,
    Body = JsonConvert.SerializeObject(new { msg = "Welcome to Belarus! :)" }),
    Headers = new Dictionary<string, string> { { "Content-Type", "application/json" } }
};

return response;
The response from Lambda will be:
{"statusCode":200,"headers":{"Content-Type":"application/json"},"multiValueHeaders":null,"body":"{\"msg\":\"Welcome to Belarus! :)\"}","isBase64Encoded":false}
The response from API Gateway will be:
{"msg":"Welcome to Belarus! :)"}
I tried all of the above suggestions, but it still didn't work while the body value was not a string:
return {
  statusCode: 200,
  headers: {
    "Content-Type": "application/json",
    "Access-Control-Allow-Origin": "*"
  },
  body: JSON.stringify({
    success: true
  }),
  isBase64Encoded: false
};
A very special case: if you pass the headers through directly, there is a chance you have this header:
"set-cookie": [ "........" ]
But Amazon needs this:
"set-cookie": "[ \\"........\\" ]"
For anyone else who struggles when the response appears valid: this does not work:
callback(null, JSON.stringify({
  isBase64Encoded: false,
  statusCode: 200,
  headers: { 'headerName': 'headerValue' },
  body: 'hello world'
}))
but this does:
callback(null, JSON.stringify({
  'isBase64Encoded': false,
  'statusCode': 200,
  'headers': { 'headerName': 'headerValue' },
  'body': 'hello world'
}))
Also, it appears that no extra keys are allowed to be present on the response object.
If you're using Go with https://github.com/aws/aws-lambda-go, you have to use events.APIGatewayProxyResponse.
func hello(ctx context.Context, event ImageEditorEvent) (events.APIGatewayProxyResponse, error) {
    return events.APIGatewayProxyResponse{
        IsBase64Encoded: false,
        StatusCode:      200,
        Headers:         headers,
        Body:            body,
    }, nil
}
I had this error because I accidentally removed the variable ServerlessExpressLambdaFunctionName from the CloudFormation AWS::Serverless::Api resource. The context here is https://github.com/awslabs/aws-serverless-express "Run serverless applications and REST APIs using your existing Node.js application framework, on top of AWS Lambda and Amazon API Gateway"
Most likely your returned body is a JSON object, but only a string is allowed for Lambda proxy integration with API Gateway.
So wrap your old response body with JSON.stringify().
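For example, a minimal sketch:

exports.handler = async (event) => {
  const payload = { ok: true };
  // body must be a string, so stringify the object before returning it
  return { statusCode: 200, body: JSON.stringify(payload) };
};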
In case the above doesn't work for anyone, I ran into this error despite setting the response variable correctly.
I was making a call to an RDS database in my function. It turned out that what was causing the problem was the security group rules (inbound) on that database.
You'll probably want to restrict the IP addresses that can access the API, but if you want to get it working quick and dirty to test whether that change fixes it, you can temporarily set the inbound rule to accept all traffic (you can also open up the full port range, though I didn't do that).
A common cause of the "Malformed Lambda proxy response" error is headers that are not {String: String, ...} key/value pairs.
Since set-cookie headers can and do appear in multiples, they are represented in http.request.callback.response as the set-cookie key having an array-of-strings value instead of a single string. While this works for developers, AWS API Gateway doesn't understand it and throws a "Malformed Lambda proxy response" error.
My solution is to do something like this:
function createHeaders(headers) {
  const singleValueHeaders = {}
  const multiValueHeaders = {}
  Object.entries(headers).forEach(([key, value]) => {
    const targetHeaders = Array.isArray(value) ? multiValueHeaders : singleValueHeaders
    Object.assign(targetHeaders, { [key]: value })
  })
  return {
    headers: singleValueHeaders,
    multiValueHeaders,
  }
}

var output = {
  ...{
    "statusCode": response.statusCode,
    "body": responseString
  },
  ...createHeaders(response.headers)
}
Note that the ... above does not mean Yada Yada Yada. It's the ES6 spread operator.
Here's another approach: configure the mapping template in your API Gateway integration request and response. Go to Integration Request -> Mapping Templates -> select "When there are no templates defined" -> enter application/json for the content type. Then you don't have to explicitly send JSON; even the response you get at your client can be a plain string.
The format of your function response is the source of this error. For API Gateway to handle a Lambda function's response, the response must be JSON in this format:
{
  "isBase64Encoded": true|false,
  "statusCode": httpStatusCode,
  "headers": { "headerName": "headerValue", ... },
  "body": "..."
}
Here's an example function in Node.js with the response correctly formatted:
exports.handler = (event, context, callback) => {
  var responseBody = {
    "key3": "value3",
    "key2": "value2",
    "key1": "value1"
  };
  var response = {
    "statusCode": 200,
    "headers": {
      "my_header": "my_value"
    },
    "body": JSON.stringify(responseBody),
    "isBase64Encoded": false
  };
  callback(null, response);
};
Ref: https://aws.amazon.com/premiumsupport/knowledge-center/malformed-502-api-gateway/
Python 3.7
Before
{
    "isBase64Encoded": False,
    "statusCode": response.status_code,
    "headers": {
        "Content-Type": "application/json",
    },
    "body": response.json()
}
After
{
    "isBase64Encoded": False,
    "statusCode": response.status_code,
    "headers": {
        "Content-Type": "application/json",
    },
    "body": str(response.json())  # body must be of string type
}
If you're new to AWS and just want to get your URL working:
If you haven't created a trigger for your Lambda function, navigate to the function in the Lambda console and create a trigger, choosing API Gateway.
Navigate to the API Gateway console -> choose your particular Lambda's API Gateway (Method Execution) -> click on Integration Request -> uncheck the "Use Lambda Proxy integration" checkbox.
Then click on "<- Method Execution" and click on the Test Client section. Provide the options and click the Test button. You should see a success response.
If you are still unable to get a success response, create an alias for the correct version (if you have multiple versions of the Lambda function).
Pick the URL from the logs and use your POST/GET tool (Postman): choose AWS Signature as the authentication type and provide your keys (AccessKey & SecretKey) in the Postman request, with your AWS region and lambda as the service name.
P.S.: This may only help beginners and may be irrelevant to others.