SonarQube and AWS Lambda - python-3.x

I recently enabled SonarQube for my Lambda functions.
Now, as we all know, this is the standard signature for any lambda_handler. However, all the logic is written against event; context is hardly used:
def lambda_handler(event, context):
After running the SonarQube scan, I'm getting:
Remove the unused function parameter "context".
SonarQube raises this as a MAJOR issue for all my Lambdas. Any suggestion on how to fix this?
My Lambdas are Python based.
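Since AWS itself mandates the two-parameter handler signature, one common workaround (a sketch, not an official SonarQube recommendation; whether your quality profile honours the underscore convention may vary) is to rename the parameter with a leading underscore to mark it as intentionally unused, or to suppress the issue in place with SonarQube's NOSONAR marker:

def lambda_handler(event, _context):  # leading underscore marks the parameter as intentionally unused
    ...

def lambda_handler(event, context):  # NOSONAR - AWS requires this exact signature
    ...

Alternatively, the issue can be marked as "Won't Fix" in the SonarQube UI.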

Related

Check if AWS Lambda is under execution right now using boto3

I have a requirement to check, from one Lambda, whether another specific Lambda is currently executing, using boto3.
I went through the boto3 documentation and found that the get_function() method returns a State for the function. But this is not the execution state; it only indicates whether the function's creation is Pending/Inactive/Active, etc.
Is there a way to find out from another Lambda whether a function is currently executing?
Any help or guidance is appreciated.
Documentation Link
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.get_function
You can use the CloudWatch metrics of your Lambda function to determine whether it's currently being invoked. https://docs.aws.amazon.com/lambda/latest/dg/monitoring-metrics.html#monitoring-metrics-types
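As a rough boto3 sketch of that idea (the function name and the one-minute window are assumptions, and CloudWatch metrics can lag by a minute or more, so treat this as a heuristic rather than a real-time check):

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

def recently_invoked(function_name="my-function"):  # hypothetical function name
    now = datetime.datetime.utcnow()
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName="Invocations",
        Dimensions=[{"Name": "FunctionName", "Value": function_name}],
        StartTime=now - datetime.timedelta(minutes=1),
        EndTime=now,
        Period=60,
        Statistics=["Sum"],
    )
    # Any non-zero sum means the function was invoked within the last minute
    return any(dp["Sum"] > 0 for dp in stats["Datapoints"])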

[AWS][Amplify] Invoke function locally crashes with no error

I have just joined a development team, and the project should run in the cloud using Amplify. I have a function called usershandler that I want to run locally. For that, I used:
amplify invoke function usershandler
This is the output i get :
Starting execution...
EVENT: {"httpMethod":"GET","body":"{\"name\": \"Amplify\"}","path":"/users","resource":"/{proxy+}","queryStringParameters":{}}
App started
get All VSM called
Connection to database was a success
null
Result:
{"statusCode":200,"body":"{\"success\":true,\"results\":[]}","headers":{"x-powered-by":"Express","access-control-allow-origin":"*","access-control-allow-headers":"Origin, X-Requested-With, Content-Type, Accept","content-type":"application/json; charset=utf-8","content-length":"29","etag":"W/\"1d-4wD7ChrrlHssGyekznKfKxR7ImE\"","date":"Tue, 21 Jul 2020 12:32:36 GMT","connection":"close"},"isBase64Encoded":false}
Finished execution.
EDIT: Also, when running the invoke command, amplify asks me for a src/event.json, while I've seen it looking for the index.js for some reason?
EDIT 2 [SOLVED]: downgrading @aws-amplify/cli to 4.14.1 seems to solve this :)
Expected behavior: The server should continue running so I can use it.
Actual behavior: It always stops after the "Finished execution" message.
The connection to the DB works fine, and the config.json contains correct values. I don't know why it is acting like this. Has anybody had the same problem?
Have a nice day.
Short answer: You are running the invoke command, which is doing just what it is supposed to do: invoking the lambda function.
If you are looking to get a local API up, then run the following command:
sam local start-api
This will read your template and, based on the endpoints you have set up, run them locally, essentially mocking API Gateway. Read more about it in the official docs here.
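For illustration, a minimal template.yaml (the resource name, handler, and runtime below are assumptions based on the question) declaring a GET /users endpoint might look like:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  UsersHandler:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      Events:
        GetUsers:
          Type: Api
          Properties:
            Path: /users
            Method: get

sam local start-api then serves that path on a local port (3000 by default).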
Explanation:
This command is one of the offerings of the AWS Serverless Application Model (AWS SAM), a tool for developing serverless applications. It is essentially an abstraction over AWS CloudFormation. Similarly, Amplify is an abstraction that makes it simple not only to develop and manage the backend but also to bring that power to the frontend.
As both of them essentially use CloudFormation templates underneath, you can leverage the capabilities of one tool with the other.
SAM provides a robust set of tools for local development, including a local Lambda mocking server, for when you are not using API Gateway.
I use this combination to develop and test my frontend along with my backend, which is written in Go, a language that is not yet as mature a backend option with Amplify as JavaScript.

Timeout when writing custom metric data to CloudWatch with AWS lambda

I'm running a vanilla AWS lambda function to count the number of messages in my RabbitMQ task queue:
import boto3
from botocore.vendored import requests

cloudwatch_client = boto3.client('cloudwatch')

def get_queue_count(user="user", password="password", domain="<my domain>/api/queues"):
    # Query the RabbitMQ management API over basic auth ('@' separates credentials from host)
    url = f"https://{user}:{password}@{domain}"
    res = requests.get(url)
    message_count = 0
    for queue in res.json():
        message_count += queue["messages"]
    return message_count

def lambda_handler(event, context):
    metric_data = [{'MetricName': 'RabbitMQQueueLength', "Unit": "None", 'Value': get_queue_count()}]
    print(metric_data)
    response = cloudwatch_client.put_metric_data(MetricData=metric_data, Namespace="RabbitMQ")
    print(response)
Which returns the following output on a test run:
Response:
{
"errorMessage": "2020-06-30T19:50:50.175Z d3945a14-82e5-42e5-b03d-3fc07d5c5148 Task timed out after 15.02 seconds"
}
Request ID:
"d3945a14-82e5-42e5-b03d-3fc07d5c5148"
Function logs:
START RequestId: d3945a14-82e5-42e5-b03d-3fc07d5c5148 Version: $LATEST
/var/runtime/botocore/vendored/requests/api.py:72: DeprecationWarning: You are using the get() function from 'botocore.vendored.requests'. This dependency was removed from Botocore and will be removed from Lambda after 2021/01/30. https://aws.amazon.com/blogs/developer/removing-the-vendored-version-of-requests-from-botocore/. Install the requests package, 'import requests' directly, and use the requests.get() function instead.
DeprecationWarning
[{'MetricName': 'RabbitMQQueueLength', 'Value': 295}]
END RequestId: d3945a14-82e5-42e5-b03d-3fc07d5c5148
You can see that I'm able to interact with the RabbitMQ API just fine--the function hangs when trying to post the metric.
The lambda function uses the IAM role put-custom-metric, which uses the policies recommended here, as well as CloudWatchFullAccess for good measure.
Resources on my internal load balancer, where my RabbitMQ server lives, are protected by a VPN, so it's necessary for me to associate this function with the proper VPC/security group. Here's how it's set up right now (I know this is working, because otherwise the communication with RabbitMQ would fail):
I read this post where multiple contributors suggest increasing the function memory and timeout settings. I've done both of these, and the timeout persists.
I can run this locally without any issue to create the metric on CloudWatch in less than 5 seconds.
@noxdafox has written a brilliant plugin that got me most of the way there, but at the end of the day I ended up going for a pure Lambda-based solution. It was surprisingly tricky getting the CloudWatch plugin running with Docker, and afterwards I had trouble with the container shutting down its services and stopping processing of the message queue. Additionally, I wanted to be able to normalize queue count by the number of worker services in my ECS cluster, so I was going to need to connect to at least one AWS resource from within my VPC anyhow. I figured it was best to keep everything simple and in the same place.
import os
import boto3
from botocore.vendored import requests

USER = os.getenv("RMQ_USER")
PASSWORD = os.getenv("RMQ_PASSWORD")

# boto3 clients pointed at the VPC interface endpoints, since the lambda has no internet access
cloudwatch_client = boto3.client(service_name='cloudwatch', endpoint_url="https://MYCLOUDWATCHURL.monitoring.us-east-1.vpce.amazonaws.com")
ecs_client = boto3.client(service_name='ecs', endpoint_url="https://vpce-MYECSURL.ecs.us-east-1.vpce.amazonaws.com")

def get_message_count(user=USER, password=PASSWORD, domain="rabbitmq.stockbets.io/api/queues"):
    url = f"https://{user}:{password}@{domain}"
    res = requests.get(url)
    message_count = 0
    for queue in res.json():
        message_count += queue["messages"]
    print(f"message count: {message_count}")
    return message_count

def get_worker_count():
    worker_data = ecs_client.describe_services(cluster="prod", services=["worker"])
    worker_count = worker_data["services"][0]["runningCount"]
    print(f"worker count: {worker_count}")
    return worker_count

def lambda_handler(event, context):
    message_count = get_message_count()
    worker_count = get_worker_count()
    print(f"msgs per worker: {message_count / worker_count}")
    metric_data = [
        {'MetricName': 'MessagesPerWorker', "Unit": "Count", 'Value': message_count / worker_count},
        {'MetricName': 'NTasks', "Unit": "Count", 'Value': worker_count}
    ]
    cloudwatch_client.put_metric_data(MetricData=metric_data, Namespace="RabbitMQ")
Creating the VPC endpoints was easier than I thought it would be. For CloudWatch, you want to search for the "monitoring" VPC endpoint during the creation step (not "cloudwatch" or "logs"). Searching for "ecs" gets you what you need for the ECS connection.
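For reference, the equivalent AWS CLI call would be something along these lines (the VPC, subnet, and security-group IDs are placeholders):

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.monitoring \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0

Repeat with --service-name com.amazonaws.us-east-1.ecs for the ECS endpoint.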
Once your lambda is up, you need to configure the metric and accompanying alerts, and then relate those to an auto-scaling policy, but that's probably beyond the scope of this post. Leave a comment if you have questions on how I worked that out.
The only reason you might want to use a Lambda function to achieve your goal is if you do not own the RabbitMQ cluster. The fact that your logic hangs during communication suggests a network issue, most likely due to misconfigured security groups.
If you can change the cluster configuration, I'd suggest installing and configuring the CloudWatch metrics exporter plugin, which does most of the heavy lifting for you.
If your cluster runs on Docker, I believe the custom Dockerfile to be the best solution. If you run your Docker instances in AWS via ECS/Fargate, the plugin should be able to automatically infer the credentials from the Task Role through ExAws. Otherwise, just follow the README instructions on how to set the credentials yourself.

AWS Lambda fails for torch (python)

All I'm trying to do is create a Flask app for Places365 and deploy it as an AWS Lambda with an API. While everything works fine on my EC2 instance, Lambda keeps failing with a
"No module named 'torch': ModuleNotFoundError" error.
Initially, when I tried to include torch as part of my virtual environment, Lambda kept failing with a "No space left" error. So I uninstalled torch from my virtual environment, redeployed the function, and added the PyTorch layer (arn:aws:lambda:us-east-1:934676248949:layer:pytorchv1-py36:2) to the function. Still, it fails with the "No module named 'torch': ModuleNotFoundError" error.
Also, I used Zappa for the Lambda deployment.
It would be great if someone could share their experience of deploying torch to Lambda.
I was able to fix it. Below is what I did:
ARN of pytorch layer that I used:
arn:aws:lambda:us-east-1:934676248949:layer:pytorchv1-py36:2
Added the following code to my Python Lambda function:
import sys

# Make /opt (where the layer contents are mounted) importable before importing torch
sys.path.insert(1, '/opt')
import unzip_requirements  # helper provided by the layer; unpacks its compressed packages
import torch

lambdas fail to log to CloudWatch

Situation - I have a lambda that:
is built with Node.js v8
has console.log() statements
is triggered by SQS events
works properly (the downstream system receives all messages, AWS X-Ray can see those executions)
Problem:
this lambda does not log anything!
But if the same lambda is called manually (using the "Test" button), all logging statements are visible in CloudWatch.
My lambda is based on this tutorial: https://www.jeremydaly.com/serverless-consumers-with-lambda-and-sqs-triggers/
A very similar situation occurs if the lambda is called from within another lambda (recursion). Only the first lambda (started manually) logs anything; every subsequent lambda in the recursion chain logs nothing.
an example can be found here:
https://theburningmonk.com/2016/04/aws-lambda-use-recursive-function-to-process-sqs-messages-part-1/
Any idea on how to tackle this problem will be highly appreciated.
