Schedule completed event is not supported by stripe - stripe-payments

I am using Stripe subscription schedules to handle downgrading subscriptions. I need to test this; however, the subscription_schedule.completed event is not supported by the Stripe CLI (even after upgrading to the latest version).
There's also no way on the Stripe dashboard to complete a schedule ahead of time. I need to handle things on my backend after this event is triggered, so what should I do?

subscription_schedule.completed events are sent when a Subscription Schedule transitions to having a status of completed. This happens when the Schedule has end_behavior: cancel and was allowed to end naturally by completing its last phase (not through an API call to cancel it).
You can trigger subscription_schedule.completed events yourself by updating or creating a new Subscription Schedule that ends a few seconds in the future and has end_behavior: cancel. Here's an example of how you would do this using curl (replacing FUTURE_TIMESTAMP with one that is a few seconds in the future, and using your own keys and object IDs):
curl https://api.stripe.com/v1/subscription_schedules \
  -u sk_test_XXX: \
  -d customer=cus_XXX \
  -d start_date=now \
  -d end_behavior=cancel \
  -d "phases[0][items][0][price]"=price_XXX \
  -d "phases[0][items][0][quantity]"=1 \
  -d "phases[0][end_date]"={{FUTURE_TIMESTAMP}}
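FUTURE_TIMESTAMP must be a Unix epoch timestamp slightly in the future. One quick way to compute "60 seconds from now" in a POSIX shell (a sketch; plain arithmetic on `date +%s` works on both GNU and BSD/macOS date):

```shell
# Compute a Unix timestamp 60 seconds in the future to use as FUTURE_TIMESTAMP.
FUTURE_TIMESTAMP=$(($(date +%s) + 60))
echo "$FUTURE_TIMESTAMP"
```

After the schedule's last phase ends at that timestamp, the emulated "natural" completion should fire subscription_schedule.completed to your webhook endpoint.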

Related

Azure function execution time as measured at the client significantly longer than duration reported in logs - Is there some setting to improve this?

While stress testing my functions, I noticed what seems like a lot of overhead when comparing the execution times reported in the Azure logs with what I measure on the client side. On average, the client measures 200-300 milliseconds more than the logs report. Is this expected behavior, or is there some setting I am missing that could cause this?
Azure Logs:
Connected!
2022-09-22T16:12:58.677 [Information] Executing 'TheFunction' (Reason='This function was programmatically called via the host APIs.', Id=0192e3cc-a5b5-4642-baa3-af1288167530)
2022-09-22T16:12:58.693 [Information] Enter THE Function
2022-09-22T16:12:58.694 [Information] Call Init()
2022-09-22T16:12:59.447 [Warning] Enter CreateRepository, No cached repository available!
2022-09-22T16:13:03.778 [Information] Executed 'TheFunction' (Succeeded, Id=0192e3cc-a5b5-4642-baa3-af1288167530, Duration=5137ms)
2022-09-22T16:13:14.353 [Information] Executing 'TheFunction' (Reason='This function was programmatically called via the host APIs.', Id=c0288ce8-3961-4c48-9f2e-4f39e0137e1d)
2022-09-22T16:13:14.354 [Information] Enter THE Function
2022-09-22T16:13:14.354 [Warning] Enter CreateRepository, Using Cached Repository
2022-09-22T16:13:14.357 [Information] Executed 'TheFunction' (Succeeded, Id=c0288ce8-3961-4c48-9f2e-4f39e0137e1d, Duration=7ms)
On the first call, it is a cold start, so we get a hefty 5137ms. The second call sends the same payload, so with caching we turn in a respectable 7ms.
On the client side, testing with curl, I get the following times:
$ time cat data/1663863194_request | curl -s -X POST -d@- -H 'Content-Type: application/json' https://some-azure-url
real 0m5.620s
$ time cat data/1663863194_request | curl -s -X POST -d@- -H 'Content-Type: application/json' https://some-azure-url
real 0m0.282s
Currently using consumption plan with SKU Y1 which I believe is Windows based.
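One way to see where the extra client-side time goes is curl's per-phase timing variables: the gap between time_starttransfer here and the Duration in the Azure log is DNS, TCP/TLS setup, and host overhead rather than function execution. A sketch (the URL is a placeholder from the question):

```shell
# Phase-by-phase timing of a single request; compare time_starttransfer
# with the Duration reported in the Azure function log.
curl -s -o /dev/null https://some-azure-url \
  -w 'dns:     %{time_namelookup}\nconnect: %{time_connect}\ntls:     %{time_appconnect}\nttfb:    %{time_starttransfer}\ntotal:   %{time_total}\n'
```

If connect/tls account for most of the difference, the overhead is network and handshake cost, not something a Functions setting will remove.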

Can't do GET request for pull subscription from local fake PubSub Server

TL;DR: What is the correct way to send a GET request to pull from a subscription on a Pub/Sub server, and what is the correct URL to use?
I am running a local Google Pub/Sub fake using gcloud beta emulators pubsub start. I have been able to publish to it successfully using a Ruby script I wrote; however, I have been unable to do a pull from a subscription.
I am aiming to accomplish this with just a GET request, not a script. One of the issues I've found is that there is plenty of documentation on doing pull subscriptions with a client or gcloud, but very little on how to access the server by URL. Perhaps I am misunderstanding what is possible, but I want to publish a message to Pub/Sub using a Ruby client, and then use Postman to make a GET request to the Pub/Sub server to retrieve the message.
I am fairly certain the issue is with how I am making the GET request, but I have reproduced everything else below for context.
Ruby Publishing Code
require "google/cloud/pubsub"
require "json"

class Publisher
  def publish(event)
    puts "1==============>>>>>>>> publishing..."
    pubsub = Google::Cloud::PubSub.new(
      project_id: "grpc-demo-proj",
      emulator_host: "localhost:8085"
    )
    topic_id = "event_topic"
    topic = pubsub.topic topic_id
    begin
      topic.publish_async "receive_event#event",
                          event: JSON.generate(event) do |result|
        raise "Failed to publish the message." unless result.succeeded?
        puts "2==============>>>>>>>> Message published asynchronously."
      end
      # Stop the async_publisher to send all queued messages immediately.
      topic.async_publisher.stop.wait!
    rescue StandardError => e
      puts "3==============>>>>>>>> Received error while publishing: #{e.message}"
    end
  end
end
This seems to work, as I get
1==============>>>>>>>> publishing...
DEBUG GRPC : calling localhost:8085:/google.pubsub.v1.Publisher/GetTopic
DEBUG GRPC : calling localhost:8085:/google.pubsub.v1.Publisher/Publish
2==============>>>>>>>> Message published asynchronously.
In my terminal.
I also have the Pub/Sub server running using the following shell scripts.
#!/bin/bash
# Kill the existing process if it's already running
if [ "$(lsof -i:8085)" ]; then
  kill $(lsof -t -i:8085)
fi
# Kick off the new process
gcloud beta emulators pubsub start --project=grpc-demo-proj
# Connect to environment variables
$(gcloud beta emulators pubsub env-init)
PubSub Setup Script
#!/bin/bash
# Wait for the pubsub emulator to boot up
sleep 7
while [[ ! "$(lsof -i:8085)" ]]
do
  echo '#===> PUBSUB EMULATOR SETUP: Waiting for PubSub Emulator to start...'
  sleep 3
done
# Create topics
curl --header "Content-Type: application/json" \
--request PUT \
http://localhost:8085/v1/projects/grpc-demo-proj/topics/event_topic
# Create test subscriptions for each topic
curl --header "Content-Type: application/json" \
--request PUT \
--data '{"topic": "projects/grpc-demo-proj/topics/event_topic"}' \
http://localhost:8085/v1/projects/grpc-demo-proj/subscriptions/event_topic.test_sub1
Again, these seem to work well.
Where I have trouble...
is doing a pull from the subscription on the Pub/Sub server using a GET request (either from Postman, or just in the browser's URL bar):
http://localhost:8085/v1/projects/grpc-demo-proj/subscriptions/event_topic.test_sub1:pull
Returns
{
"error": {
"code": 400,
"message": "Invalid [subscriptions] name: (name=projects/grpc-demo-proj/subscriptions/event_topic.test_sub1:pull)",
"status": "INVALID_ARGUMENT"
}
}
But the subscription name is valid, as
http://localhost:8085/v1/projects/grpc-demo-proj/subscriptions/event_topic.test_sub1
returns
{
"name": "projects/grpc-demo-proj/subscriptions/event_topic.test_sub1",
"topic": "projects/grpc-demo-proj/topics/event_topic",
"pushConfig": {},
"ackDeadlineSeconds": 10,
"messageRetentionDuration": "604800s"
}
Which would seem to confirm the server is working, and the topics and subscriptions have been successfully created.
Although -NOT- the solution I am looking for, I tried using gcloud on the command line:
bgc#jadzia:~$ gcloud beta pubsub subscriptions pull test_sub1
ERROR: (gcloud.beta.pubsub.subscriptions.pull) NOT_FOUND: Resource not found (resource=test_sub1).
Even though other sources seem to confirm this subscription does exist.
While it could be an issue with Ruby incorrectly reporting that it published the message successfully, or something wrong with the server, I suspect I'm just not making the GET request correctly. I've tried several variations on the above GET request, but won't list them all here.
So, without using a script - how can I get a message back from the pub/sub server? Ideally a URL for a GET request I can plug into PostMan, but command-line based solutions may also work here.
I reproduced your local fake Pub/Sub server using all the scripts you posted. As you commented, I used POST instead of GET and got a response. See the Pub/Sub subscriptions pull reference:
POST https://pubsub.googleapis.com/v1/{subscription}:pull
POST request for subscriptions pull:
curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"maxMessages": 1}' \
  http://localhost:8085/v1/projects/grpc-demo-proj/subscriptions/event_topic.test_sub1:pull
Output of subscriptions pull:
The Pub/Sub message data in the response is base64-encoded. Note that I ran everything (creation of the Pub/Sub server, topics, subscriber, publishing of the message, pulling of the message) in the Google Cloud Shell.
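One thing to keep in mind after a successful pull: pulled messages are redelivered once the ack deadline (10 s per the subscription above) expires, unless they are acknowledged. A sketch, assuming the emulator is running and ACK_ID stands in for the ackId value returned in the :pull response:

```shell
# Acknowledge a pulled message so the emulator does not redeliver it.
# ACK_ID is a placeholder for the ackId returned by the :pull call.
curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"ackIds": ["ACK_ID"]}' \
  http://localhost:8085/v1/projects/grpc-demo-proj/subscriptions/event_topic.test_sub1:acknowledge
```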
EDIT 1:
As Brian mentioned, this is the request that works for him. It works in my testing as well!
curl --header "Content-Type: application/json" \
  --request POST \
  "localhost:8085/v1/projects/grpc-demo-proj/subscriptions/event_topic.test_sub1:pull?maxMessages=5"

Experiencing high latency when querying DynamoDB from an EC2 instance

I have a basic NodeJS web application which scans a DynamoDB table with only ~10 items. When I run the application locally on my machine, the operation takes less than 1 second.
However, when I deploy the application on an EC2 instance, the same operation takes almost 5 seconds. The EC2 instance (t2.micro) and the DynamoDB table are in the same region. I have also enabled a VPC gateway endpoint for DynamoDB, but the latency remains the same.
Here are the curl requests to test the performance:
curl -X POST http://localhost:9000/login -H 'Content-Type: application/json' -d '{ "email": "xyz@gmail.com", "password": "admin", "type": "talent" }' -s -o /dev/null -w "%{time_starttransfer}\n"
0.394470
curl -X POST http://EC2_IP_ADDRESS:9000/login -H 'Content-Type: application/json' -d '{ "email": "xyz@gmail.com", "password": "admin", "type": "talent" }' -s -o /dev/null -w "%{time_starttransfer}\n"
5.207561
Please help me understand what could be causing these latencies, and how I can achieve low latency when querying DynamoDB from an EC2 instance.
You need to find out how much of the time is taken by the database and how much by the network. To do that, check the following:
Run your query directly against DynamoDB to measure how long the database itself takes.
Check whether the turnaround-time difference for your query between execution on the EC2 server and on your local machine is the same with and without the VPC endpoint.
The extended latency is very likely not due to DynamoDB if you think about the network hops:
EC2 => EC2 => DynamoDB (curl http://localhost/...)
Your desktop => internet => EC2 => DynamoDB (curl http://EC2_IP/...)
You would likely see similar high latency even if you bypassed the DynamoDB call entirely, e.g. comment out the code that makes the db call and return a mock response - I suspect you'll measure about the same latency as in #2.
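To put numbers on that reasoning, curl can report connection setup separately from time to first byte; time_starttransfer minus time_connect roughly approximates server-side processing, independent of where the client sits. A sketch reusing the request from the question (EC2_IP_ADDRESS is the question's placeholder):

```shell
# connect = network setup only; ttfb = setup + server-side processing.
# ttfb - connect should stay roughly constant whether you call from the
# desktop or from the EC2 instance, if DynamoDB is not the bottleneck.
curl -X POST http://EC2_IP_ADDRESS:9000/login \
  -H 'Content-Type: application/json' \
  -d '{ "email": "xyz@gmail.com", "password": "admin", "type": "talent" }' \
  -s -o /dev/null -w 'connect: %{time_connect}\nttfb: %{time_starttransfer}\n'
```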

How to send a push notification in unificationengine

How do I send a push notification to a channel in unificationengine, given that the user has already created the channel via UE?
Example for pushing messages to UE:
curl -XPOST https://apiv2.unificationengine.com/v2/push/notification -u CONNECTOR_KEY:CONNECTOR_SECRET --data "{ \"channel\": \"6d2d2111-d3b6-4ac9-8e67-55980a141fdd_sms\", \"data\": \"test message\"}" -k
The documentation for push messages is given in the Connector section at https://developer.unificationengine.com.

How can I know how long a Jenkins job has been in the wait queue after the job is finished?

For statistics, I want to see how long a job sits in the waiting queue, so that I can tune the system to make jobs run on time.
If the job is still in the queue, it is possible to find it in the waiting queue on the front page; see How can I tell how long a Jenkins job has been in the wait queue?
Or use http://<jenkins_url>/queue/api/json?pretty=true
Is it possible to check somewhere to get the "time waiting in queue" for a specific job after the job has finished?
It would be nice if it could be obtained via the public Jenkins API.
// got answer from colleague
This can be achieved by installing the Jenkins Metrics Plugin; after it is installed, you will see the wait time on the build result page.
Jenkins REST API: you can then get the wait time in the queue from http://localhost:8080/job/demo/1/api/json?pretty=true&depth=2 . queuingDurationMillis is the data I wanted.
"actions" : [
  {
    "queuingDurationMillis" : 33,
    "totalDurationMillis" : 3067
  }
],
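That field can also be pulled out of the JSON from the command line; a sketch using a sample payload (in practice, pipe the output of curl -s on the build's api/json URL instead of the echo):

```shell
# Extract queuingDurationMillis from the build JSON shown above.
echo '{"actions":[{"queuingDurationMillis":33,"totalDurationMillis":3067}]}' \
  | python3 -c "import json, sys
d = json.load(sys.stdin)
print(next(a['queuingDurationMillis'] for a in d['actions'] if 'queuingDurationMillis' in a))"
```

This prints 33 for the sample payload; note the REST response mixes several action objects, so filtering for the one containing queuingDurationMillis is needed.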
Groovy script: we can also get this data from the internal model in Groovy; run the code below in the Jenkins script console at http://localhost:8080/script
job = hudson.model.Hudson.instance.getItem("demo")
build = job.getLastBuild()
action = build.getAction(jenkins.metrics.impl.TimeInQueueAction.class)
println action.getQueuingDurationMillis()
You can see a demo using Docker by running the command below and opening Jenkins in the browser to view the demo job:
docker run -it -p 8080:8080 larrycai/jenkins-metrics
