My jobs are not being executed by the queue lambda - laravel-vapor

My understanding is that Vapor sets the queue driver config to SQS so that all jobs are executed on the queue lambda. I am dispatching jobs and can clearly see they are not running on the queue lambda. Did I miss something? My env.production does not have an entry for the queue driver, as I assumed Vapor injects it, like it does in my other projects.
// In a controller
RenamePhoto::dispatch($photo_id, $name);

// The job class (so we are clear here)
class RenamePhoto implements ShouldQueue

// In the RenamePhoto constructor
Log::info("Adding to the Rename Process " . $photo_id);

// In the RenamePhoto handle() method
Log::info("Processing rename of photo " . $this->photo_id . ' to ' . $this->name);
Both of those log lines show up in the HTTP lambda's logs.
Laravel Framework 9.24.0
Laravel Vapor 1.42.0

This was caused by me missing an update to Laravel's config/queue.php.
QUEUE_DRIVER became QUEUE_CONNECTION, and several values needed by AWS were added as env values in the sqs section.
Hope this helps someone moving an older project to Vapor.
See https://github.com/laravel/laravel/blob/9.x/config/queue.php for the current defaults.

Related

How to increase the AWS lambda to lambda connection timeout or keep the connection alive?

I am using the boto3 Lambda client to invoke a lambda_S from a lambda_M. My code looks something like this:
import json

import boto3
import botocore.config

cfg = botocore.config.Config(
    retries={'max_attempts': 0},
    read_timeout=840,
    connect_timeout=600,  # also tried adding region_name="us-east-1"
)

lambda_client = boto3.client('lambda', config=cfg)  # even tried without config

invoke_response = lambda_client.invoke(
    FunctionName=lambda_name,
    InvocationType='RequestResponse',
    Payload=json.dumps(request),
)
lambda_S is supposed to run for about 6 minutes, and I want lambda_M to stay alive to get the response back from lambda_S, but lambda_M is timing out after a CloudWatch message like
"Failed to connect to proxy URL: http://aws-proxy..."
I searched and found advice like "configure your HTTP client, SDK, firewall, proxy or operating system to allow for long connections with timeout or keep-alive settings". But the issue is I have no idea how to do any of that with Lambda. Any help is highly appreciated.
I would approach this a bit differently. Lambda bills you for the time your code spends running, so in general you should avoid waiting inside a lambda. One way to avoid it is to create an SNS topic and use that as the messenger to trigger another lambda.
The workflow goes like this:
SNS-A -> triggers lambda-A
SNS-B -> triggers lambda-B
So if your lambda-B wants to send something to lambda-A to process and needs the results back, then from lambda-B you publish a message to the SNS-A topic and quit.
SNS-A triggers lambda-A, which does its work and at the end publishes a message to SNS-B.
SNS-B triggers lambda-B, which picks up the result (both handlers are sketched below).
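Here is a minimal boto3 sketch of that pattern, assuming both topic ARNs are exposed to the functions as environment variables; the names SNS_A_TOPIC_ARN, SNS_B_TOPIC_ARN and do_the_work are placeholders rather than anything from your setup.

import json
import os

import boto3

sns = boto3.client('sns')

def lambda_b_handler(event, context):
    if 'Records' in event:
        # Second leg: SNS-B invoked us, so lambda-A has finished -- pick up the result.
        for record in event['Records']:
            result = json.loads(record['Sns']['Message'])
            print('result from lambda-A:', result)
        return
    # First leg: hand the work off to lambda-A by publishing to SNS-A, then exit
    # immediately instead of waiting on a RequestResponse invoke.
    sns.publish(
        TopicArn=os.environ['SNS_A_TOPIC_ARN'],  # placeholder env var
        Message=json.dumps({'payload': event}),
    )
    return {'status': 'queued'}

def lambda_a_handler(event, context):
    # Triggered by SNS-A: do the long-running work, then publish the result to SNS-B.
    for record in event['Records']:
        request = json.loads(record['Sns']['Message'])
        result = do_the_work(request)  # placeholder for the ~6 minute job
        sns.publish(
            TopicArn=os.environ['SNS_B_TOPIC_ARN'],  # placeholder env var
            Message=json.dumps(result),
        )

def do_the_work(request):
    return {'done': True, 'input': request}

The two handlers would of course be deployed as two separate functions; they are shown together here only to make the round trip visible.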
AWS has example documentation on what policies you should put in place.
I don't know how you are automating the deployment of native resources like SNS topics and lambdas; assuming you will use CloudFormation:
you create your AWS::Lambda::Function,
you create your AWS::SNS::Topic,
and in its definition you add the 'Subscription' property and point it at your lambda.
So in our example, your SNS-A will have a subscription defined for lambda-A.
Lastly, you grant SNS permission to trigger the lambda: AWS::Lambda::Permission.
When these three are in place, you are all set to send messages to the SNS topic, which will now be able to trigger the lambda.
You will find SO answers to questions on how to do this in CloudFormation (example), but you can also read up on the AWS CloudFormation documentation.
If you are not worried about automating the setup and just want to test it manually, then the aws-cli is your friend.
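If you end up scripting that manual wiring in Python rather than the aws-cli, a rough boto3 sketch of the SNS-A -> lambda-A half might look like the following (the ARNs are placeholders, and SNS-B -> lambda-B is wired the same way):

import boto3

sns = boto3.client('sns')
lambda_client = boto3.client('lambda')

topic_arn = 'arn:aws:sns:us-east-1:123456789012:SNS-A'                    # placeholder
function_arn = 'arn:aws:lambda:us-east-1:123456789012:function:lambda-A'  # placeholder

# Subscribe lambda-A to the SNS-A topic (what the 'Subscription' property does in CloudFormation).
sns.subscribe(TopicArn=topic_arn, Protocol='lambda', Endpoint=function_arn)

# Grant SNS permission to invoke lambda-A (the AWS::Lambda::Permission equivalent).
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId='allow-sns-a-invoke',
    Action='lambda:InvokeFunction',
    Principal='sns.amazonaws.com',
    SourceArn=topic_arn,
)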

Is there a memory limit for User Code Deployment on Hazelcast Cloud? (free version)

I'm currently playing with Hazelcast Cloud. My use case requires me to upload about 50 MB of jar file dependencies to the Hazelcast Cloud servers. I found that the upload seems to give up after about a minute or so: I get an upload rate of about 1 MB a second, it drops after a while and then stops. I have repeated it a few times and the same thing happens.
Here is the config code I'm using:
ClientConfig config = new ClientConfig();

ClientUserCodeDeploymentConfig clientUserCodeDeploymentConfig =
        new ClientUserCodeDeploymentConfig();
// added many jars here...
clientUserCodeDeploymentConfig.addJar("jar dependency path..");
clientUserCodeDeploymentConfig.addJar("jar dependency path..");
clientUserCodeDeploymentConfig.addJar("jar dependency path..");
clientUserCodeDeploymentConfig.setEnabled(true);
config.setUserCodeDeploymentConfig(clientUserCodeDeploymentConfig);

ClientNetworkConfig networkConfig = new ClientNetworkConfig();
networkConfig.setConnectionTimeout(9999999); // i.e. don't time out
networkConfig.setConnectionAttemptPeriod(9999999); // i.e. don't time out
config.setNetworkConfig(networkConfig);
Any idea what the cause is? Maybe there's a limit on the free cloud cluster?
I'd suggest using smaller jars, because this feature (client user code deployment) was designed for somewhat different use cases:
You have objects that run on the cluster via the clients, such as Runnable, Callable and Entry Processors.
You have new or amended user domain objects (in-memory format of the IMap set to Object) which need to be deployed into the cluster.
Please see more info here.

Scheduler Job not created - Firebase Scheduled Function

I have written a scheduled function in Node.js using TypeScript that deploys successfully. The related pub/sub topic gets created automatically, but somehow the related scheduler job does not.
This is even after getting these lines in the deploy output:
i scheduler: ensuring necessary APIs are enabled...
i pubsub: ensuring necessary APIs are enabled...
+ scheduler: all necessary APIs are enabled
+ pubsub: all necessary APIs are enabled
+ functions: created scheduler job firebase-schedule-myFunction-us-central1
+ functions[myFunction(us-central1)]: Successful create operation.
+ Deploy complete!
I have cloned the sample at https://github.com/firebase/functions-samples/tree/master/delete-unused-accounts-cron which deploys and automatically creates both the related pub/sub topic and scheduler job.
What could I be missing?
Try to change .timeZone('utc') (per the docs) to .timeZone('Etc/UTC') (also per the self-contradictory docs).
It seems that when using the 'every 5 minutes' syntax, the deploy does not create the scheduler job.
Switching to the cron syntax solved the problem for me.
Maybe your cron syntax isn't correct. There are some tools to validate the syntax.
Check your firebase-debug.log.
At some point, it will make a POST request to:
>> HTTP REQUEST POST https://cloudscheduler.googleapis.com/v1beta1/projects/*project_name*/locations/*location*/jobs
This must return a 200 response.

How can I change the name of a task that Celery sends to a backend?

I have built a queue system using Celery that accepts web requests and executes some tasks to act on those requests. I'm using Redis as the backend for Celery, but I imagine this question would apply to all backends.
Celery is storing each result in the backend under a key named celery-task-meta-<task ID>. This is meaningless to me. How can I change the name of the result key that Celery sends to Redis? I have searched through all of Celery's documentation to try to figure out how to do this.
The Redis CLI monitor shows that Celery is issuing a SETEX command with the following arguments:
"SETEX" "celery-task-meta-dd32ded3-00aa-4884-8b21-42f8332e7fac"
"86400" "{\"status\": \"SUCCESS\", \"result\": {\"mode\": \"staging\",
\"123\": 50}, \"traceback\": null, \"children\": [], \"task_id\":
\"dd32ded3-00aa-4884-8b21-42f8332e7fac\", \"date_done\":
\"2019-05-09T16:44:12.826951\", \"parent_id\":
\"2e99d958-cd5a-4700-a7c2-22c90f387f28\"}"
The "result": {...} that you can see in the SETEX command above is what the task returns. I would like the SETEX to be more along the lines of:
"SETEX" "mode-staging-123-50-SUCCESS" "{...}", so that when I view all my keys in Redis, the name of the key is informational to me.
You can't change this. The task key is created by the ResultConsumer class that the Redis backend uses. ResultConsumer then delegates creation of the task key to the BaseKeyValueStoreBackend class. The get_key_for_task method, which actually creates the key, uses a hardcoded task_keyprefix set to celery-task-meta-. So, to change the behaviour, you would have to subclass these classes. There's no configuration option for it.
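A minimal sketch of that subclassing approach, assuming all you need is a different key prefix; the module path myproject.backends and the prefix myapp-task-meta- are made up for illustration:

# myproject/backends.py (hypothetical module)
from celery.backends.redis import RedisBackend

class CustomPrefixRedisBackend(RedisBackend):
    # The key/value base backend builds result keys as task_keyprefix + task_id,
    # so overriding the prefix changes every result key this backend writes.
    task_keyprefix = 'myapp-task-meta-'

Note that the key is built from the task ID alone, so embedding the task's return value in the key (like mode-staging-123-50-SUCCESS) is not something a simple prefix override can give you. How you point Celery at a custom backend class depends on your version and configuration, so check the result_backend section of the Celery docs rather than copying any particular wiring syntax from here.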

lambdas fail to log to CloudWatch

Situation - I have a lambda that:
is built with Node.js v8
has console.log() statements
is triggered by SQS events
works properly (the downstream system receives all messages, AWS X-Ray can see those executions)
Problem:
this lambda does not log anything!
But if the same lambda is invoked manually (using the "Test" button), all logging statements are visible in CloudWatch.
My lambda is based on this tutorial: https://www.jeremydaly.com/serverless-consumers-with-lambda-and-sqs-triggers/
A very similar situation occurs when the lambda is called from within another lambda (recursion). Only the first lambda (started manually) logs anything; every subsequent lambda in the recursion chain logs nothing.
An example can be found here:
https://theburningmonk.com/2016/04/aws-lambda-use-recursive-function-to-process-sqs-messages-part-1/
Any idea how to tackle this problem will be highly appreciated.
