Does anyone know if AWS RDS generates an event for SSL certificate expiry? Two years ago a certificate expired due to the rotation that happens every 5 years. I would like an event that notifies me each day when we are close to expiry. I can't seem to find any documentation on the event pattern or ID, does one exist? My other option is to run a Lambda that calls describe-certificates every so often and alerts me if a certificate is within x days of expiry. I would prefer the former event-driven approach over the latter.
If anyone has any suggestions that would be appreciated. Thanks!
It's best to check for the expiry yourself with a Lambda function.
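If you do go with the Lambda route, a minimal sketch of that function could look like the following (assuming the AWS SDK v3 clients for RDS and SNS, a placeholder SNS topic ARN, and a daily EventBridge scheduled rule as the trigger):

import { RDSClient, DescribeCertificatesCommand } from "@aws-sdk/client-rds";
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const rds = new RDSClient({});
const sns = new SNSClient({});

const WARN_DAYS = 30; // alert when a certificate expires within 30 days (adjust as needed)
const TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:rds-cert-expiry"; // placeholder topic

// Run this daily (e.g. from an EventBridge scheduled rule) to check certificate expiry.
export const handler = async (): Promise<void> => {
  const { Certificates = [] } = await rds.send(new DescribeCertificatesCommand({}));
  const soon = Date.now() + WARN_DAYS * 24 * 60 * 60 * 1000;

  for (const cert of Certificates) {
    if (cert.ValidTill && cert.ValidTill.getTime() < soon) {
      await sns.send(new PublishCommand({
        TopicArn: TOPIC_ARN,
        Subject: `RDS certificate ${cert.CertificateIdentifier} expires soon`,
        Message: `Certificate ${cert.CertificateIdentifier} is valid until ${cert.ValidTill.toISOString()}.`,
      }));
    }
  }
};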
Use case:
A user creates a meeting appointment and should be notified 24 hours, 1 hour, and 5 minutes before the appointment.
Current implementation:
1. When an appointment is created, it is saved in DynamoDB with a TTL of (appointment time - 24 hours).
2. When the TTL expires, DynamoDB removes the item.
3. A Lambda listens to the DynamoDB stream and is triggered by that removal. The item carries 3 additional boolean flags: 24hours, 1hour, 5minutes. When the item is removed at the 24-hours-before mark, this Lambda sets the 24hours flag to true (so the next step knows the 24-hour push notification has already been sent), saves the item again with a new TTL (appointment time - 1 hour), and the push notification is sent.
4. Same as in 2 and 3: the TTL expires, the Lambda sets the 1hour flag to true, sets a new TTL (appointment time - 5 minutes), saves the item again, and the push notification is sent.
5. Again: the TTL expires and the final push notification is sent. (A rough sketch of this stream-handler logic is shown below.)
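For reference, a rough sketch of what that stream handler could look like (assuming the AWS SDK v3 document client, a placeholder table name, an appointmentTime attribute in epoch seconds, and a TTL attribute named ttl; attribute names are illustrative, not taken from the real table):

import { DynamoDBStreamEvent } from "aws-lambda";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";
import { unmarshall } from "@aws-sdk/util-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = "Appointments"; // placeholder table; its TTL attribute is assumed to be named "ttl"

// Hypothetical push helper; replace with the real notification code.
async function sendPushNotification(item: Record<string, any>): Promise<void> {
  console.log("push notification for appointment", item.appointmentId);
}

// Triggered by the DynamoDB stream when TTL removes an item; re-inserts it with the next TTL.
export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    if (record.eventName !== "REMOVE" || !record.dynamodb?.OldImage) continue;

    const item = unmarshall(record.dynamodb.OldImage as any);
    const appointmentTime = item.appointmentTime; // epoch seconds, assumed attribute name

    await sendPushNotification(item);

    if (!item["24hours"]) {
      // 24-hour reminder just fired; schedule the 1-hour reminder.
      await ddb.send(new PutCommand({
        TableName: TABLE,
        Item: { ...item, "24hours": true, ttl: appointmentTime - 3600 },
      }));
    } else if (!item["1hour"]) {
      // 1-hour reminder just fired; schedule the 5-minute reminder.
      await ddb.send(new PutCommand({
        TableName: TABLE,
        Item: { ...item, "1hour": true, ttl: appointmentTime - 300 },
      }));
    }
    // After the 5-minute reminder the item stays deleted.
  }
};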
Concern: DynamoDB does not guarantee that the item will be removed exactly when the TTL expires.
Are there any other solutions that are more efficient than mine?
Current stack: AWS/Serverless Framework/NodeJS.
Recently (10-Nov-2022), AWS launched a new service called EventBridge Scheduler. I hope this will fulfill all of your requirements, and it's serverless too. What you have to do is create three one-time schedules in EventBridge Scheduler and set the target as your current AWS Lambda function. Then you don't need Amazon DynamoDB anymore. I hope this answers your question.
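A minimal sketch of creating such a one-time schedule with the AWS SDK v3 Scheduler client (the Lambda ARN, IAM role ARN, and schedule name are placeholders):

import { SchedulerClient, CreateScheduleCommand } from "@aws-sdk/client-scheduler";

const scheduler = new SchedulerClient({});

// Creates a one-time schedule that invokes the notification Lambda at the given time.
async function scheduleReminder(appointmentId: string, fireAt: Date, payload: object): Promise<void> {
  // "at(...)" expressions are in UTC without a trailing "Z", e.g. at(2024-06-01T10:00:00)
  const expression = `at(${fireAt.toISOString().slice(0, 19)})`;
  await scheduler.send(new CreateScheduleCommand({
    Name: `appointment-${appointmentId}-24h`,
    ScheduleExpression: expression,
    FlexibleTimeWindow: { Mode: "OFF" },
    Target: {
      Arn: "arn:aws:lambda:us-east-1:123456789012:function:sendAppointmentReminder", // assumed Lambda ARN
      RoleArn: "arn:aws:iam::123456789012:role/scheduler-invoke-lambda",             // assumed role ARN
      Input: JSON.stringify(payload),
    },
  }));
}

You would call scheduleReminder three times per appointment (24 hours, 1 hour, and 5 minutes before), which replaces the TTL round-tripping entirely.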
If you just want to use the serverless stack, then this is one of the best ways to do it. But if you want something more functional, you can create a bull queue server (https://www.npmjs.com/package/bull) and host it inside an EC2 instance. It provides delayed jobs and queues.
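A small sketch of a delayed job with bull (queue name, Redis URL, and the push helper are placeholders, not part of the original setup):

import Queue from "bull";

// Placeholder queue name and Redis connection URL.
const notificationQueue = new Queue("appointment-notifications", "redis://127.0.0.1:6379");

// Hypothetical push helper; replace with the real delivery code.
async function sendPushNotification(userId: string, message: string): Promise<void> {
  console.log(`push to ${userId}: ${message}`);
}

// Producer: enqueue a reminder that becomes due 24 hours before the appointment.
async function scheduleReminder(userId: string, appointmentTime: number): Promise<void> {
  const delay = appointmentTime - 24 * 60 * 60 * 1000 - Date.now();
  await notificationQueue.add(
    { userId, message: "Your appointment starts in 24 hours" },
    { delay },
  );
}

// Consumer: bull hands the job to this processor once the delay has elapsed.
notificationQueue.process(async (job) => {
  await sendPushNotification(job.data.userId, job.data.message);
});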
One suggestion I would like to give is to not send notifications directly from the Lambda; instead, push them onto a queue and send them from there, because if your system scales, your Lambdas will start getting throttled when they have to send thousands of notifications at a single point in time.
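For example, a sketch of handing the notification off to an SQS queue instead of sending it directly (the queue URL is a placeholder; assumes the AWS SDK v3 SQS client):

import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});
const QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/push-notifications"; // placeholder

// Enqueue the push; a separate consumer drains the queue at a controlled rate,
// so a burst of reminders does not throttle the sending Lambda.
export async function enqueueNotification(userId: string, message: string): Promise<void> {
  await sqs.send(new SendMessageCommand({
    QueueUrl: QUEUE_URL,
    MessageBody: JSON.stringify({ userId, message }),
  }));
}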
I need to build a microservice that scrapes a message once a day and persists it somewhere. It does not need to be accessible after 24 hours (it can be deleted). It doesn't really matter where or how, but I need to access it from an Express.js endpoint and return the message. Currently we use Redis and MongoDB for data persistence. It feels wrong to create a whole collection for one tiny service, and I'm not sure of an application of Redis that would fulfill this task. What's my best option? Open to any suggestions, thank you!
You can use YugabyteDB and set a table-level TTL of 24 hours; the data will then be deleted automatically.
Redis provides an expiration mechanism out of the box. You can associate a timeout with a key, and it will be automatically deleted after the timeout has expired. Some official documentation here.
Redis also provides logical databases, if you want to keep these expiring keys separated from the rest of your application. So you do not need to spin up another machine. Some official documentation here.
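A minimal sketch of both ideas with node-redis v4 (connection URL, logical database number, and key name are placeholders):

import { createClient } from "redis";

// database: 1 uses a separate logical database so the expiring key stays apart from other data.
const client = createClient({ url: "redis://localhost:6379", database: 1 });

export async function storeDailyMessage(message: string): Promise<void> {
  if (!client.isOpen) await client.connect();
  // EX sets the time-to-live in seconds; Redis deletes the key automatically after 24 hours.
  await client.set("daily:message", message, { EX: 24 * 60 * 60 });
}

export async function getDailyMessage(): Promise<string | null> {
  if (!client.isOpen) await client.connect();
  return client.get("daily:message"); // null once the key has expired
}

Your Express.js endpoint would just call getDailyMessage and return the result (or a 404 once it has expired).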
I'm trying to implement a basic service that receives a message and a time in the future, and once the time arrives, it prints the message.
I want to implement it with Redis.
While investigating the capabilities of Redis, I found that by using keyspace notifications (https://redis.io/topics/notifications) on expired keys together with a subscription, I can get what I want.
But I am facing a problem: if the service is down for any reason, I might lose those expiry triggers.
To resolve that issue, I thought of having a queue (in Redis as well) which would store expired keys; once the service is back up, it would pull all the expired values. But for that, I need some kind of "stored procedure" that handles the expiry routing.
Unfortunately, I couldn't find a way to do that.
So the question is: is it possible to implement this with the current capabilities of Redis, and do I have any alternatives?
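For what it's worth, a minimal sketch of the expired-key notification approach described above, using node-redis v4 and assuming logical database 0 (note that this alone does not solve the missed-events-while-down concern, since expired events are only delivered to subscribers that are connected at the time):

import { createClient } from "redis";

async function main(): Promise<void> {
  const client = createClient({ url: "redis://localhost:6379" });
  const subscriber = client.duplicate(); // pub/sub needs a dedicated connection
  await client.connect();
  await subscriber.connect();

  // Enable keyspace notifications for expired events ("Ex"); this can also be set in redis.conf.
  await client.configSet("notify-keyspace-events", "Ex");

  // Schedule a message: store the payload under one key and set an expiring "trigger" key.
  await client.set("msg:123", "Hello, future!");
  await client.set("trigger:123", "1", { EX: 60 }); // fire in 60 seconds

  // When the trigger key expires, fetch and print the payload.
  await subscriber.subscribe("__keyevent@0__:expired", async (expiredKey) => {
    if (!expiredKey.startsWith("trigger:")) return;
    const id = expiredKey.slice("trigger:".length);
    const message = await client.get(`msg:${id}`);
    console.log(message);
    await client.del(`msg:${id}`);
  });
}

main().catch(console.error);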
I created a QnA bot using the Azure Bot service, and now I'm seeing data transfers out of my subscription of over 1 GB a day! I cannot figure out why, but since it's billable, I'd like to know why and how I can stop it.
The bot isn't being used yet, so no one is sending queries to it. I'm confused how this is happening.
Here's a screenshot of the usage graph for the last hour, as well as a screenshot of the billing for the last few days showing the sudden jump in use.
Is this normal?
If you add AzureWebJobsDisableHomepage with a value of true to the App settings, the data out will stop.
The setting itself is documented here: https://github.com/Azure/azure-webjobs-sdk-script/wiki/Configuration-Settings (although it doesn't provide an explanation for how this setting affects a bot specifically)
The reasoning behind what is happening is a little complex. Azure Functions are not normally "in memory" and available all the time. There is a small spin-up time that is not ideal within a bot. So, apparently there is a job set up with consumption-plan bots to ping it every 10 seconds (and by "ping", I mean retrieve the root of the site). If you open the Log Stream, you'll see an HTTP GET request every 10 seconds. Adding AzureWebJobsDisableHomepage doesn't disable the request, but changes the status of what is returned from "OK" to "NoContent".
This will be added to the Bot Service arm template soon (so future consumption plan bots do not automatically accrue these data usages).
Today at a customer, we analysed the logs of the previous weeks and found the following issue regarding Windows Azure Service Bus Queues:
The request was terminated because the entity is being throttled.
Please wait 10 seconds and try again.
After verifying the code, I told them to use the Transient Fault Handling Application Block (TOPAZ) to implement a retry policy like this one:
// Retry up to 5 times, starting with a 1-second delay and increasing by 2 seconds on each attempt.
var retryStrategy = new Incremental(5, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(2));
// Retry only when the Service Bus error detection strategy classifies the exception as transient.
var retryPolicy = new RetryPolicy<ServiceBusTransientErrorDetectionStrategy>(retryStrategy);
The customer answered:
"Ah that's great, so it will also handle the fact that it should wait
for 10 seconds when throttled."
Come to think of it, I never verified whether this was the case or not; I always assumed it was. In the Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling assembly I looked for code that would wait for 10 seconds in case of throttling, but didn't find anything.
Does this mean that TOPAZ isn't sufficient to create resilient applications? Should it be combined with some custom code to handle throttling (i.e. wait 10 seconds in case of a specific exception)?
As far as throttling is concerned, TOPAZ provides a set of built-in retry strategies, including:
- Fixed interval
- Incremental intervals
- Random exponential back-off intervals
You can also write your own custom retry strategy and plug it into TOPAZ.
Also, as Brent indicated, the 10-second wait is not mandatory. In many cases, retrying immediately may succeed without the need to wait. By default, TOPAZ performs the first retry immediately before using the retry intervals defined by the strategy.
For more info, see Ch.6 of the "Building Elastic and Resilient Cloud Apps" Developer's Guide, also available as epub/mobi/pdf from here.
If you have suggestions/feature requests for TOPAZ, please submit them via UserVoice.
As I recall, the "10 second" wait isn't a requirement. Additionally, I believe TOPAZ also has back-off capabilities which would help you overcome this.
On a personal note, I'd argue that simply utilizing something like TOPAZ is not sufficient to create a truly resilient solution. Resiliency goes beyond just throttling on a single connection point; you'll also need to be able to handle failover to a redundant endpoint, which TOPAZ won't do.