I have set up Stripe payment intents with webhooks and tested in my development environment on my laptop using the Stripe CLI.
After that was working I added it to our staging environment on a server. Both are using the same Stripe test account.
Now when I am testing on my laptop it seems that the staging environment is receiving the same webhook requests. This is causing errors to show up in the logs as it's looking for data that doesn't exist in the staging environment.
Is it possible to only trigger the webhook in the environment that the original request came from?
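One possible workaround (a sketch, not Stripe's official mechanism): since both environments share one test account, each environment could tag the PaymentIntents it creates with an environment name in metadata, and each webhook handler could acknowledge but skip events tagged for the other environment. The APP_ENV variable and the "environment" metadata key below are my own invented names, not Stripe conventions.

```javascript
// Sketch: filter incoming Stripe events by an environment tag.
// Assumes each environment creates its PaymentIntents with
// metadata: { environment: process.env.APP_ENV } -- both the
// env var name and the metadata key are assumptions.

function isEventForThisEnvironment(event, thisEnv) {
  const metadata =
    (event.data && event.data.object && event.data.object.metadata) || {};
  return metadata.environment === thisEnv;
}

// In an Express webhook handler, after signature verification,
// you would acknowledge and skip events that belong elsewhere:
//
// app.post('/stripe/webhook', (req, res) => {
//   const event = req.body;
//   if (!isEventForThisEnvironment(event, process.env.APP_ENV)) {
//     return res.sendStatus(200); // ack so Stripe stops retrying
//   }
//   // ...handle the event...
//   res.sendStatus(200);
// });
```

Acknowledging (rather than erroring on) foreign events matters: a non-2xx response would make Stripe retry the delivery against both environments.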
I have an App Service published and I have created 2 deployment slots to test an update: one with the configuration cloned and another without cloning the configuration.
When I test my local project, it works fine.
The published API works fine.
But when I publish to either of the slots I get a 500 Internal Server Error.
And I donĀ“t see any error log here
I don't know where I can see some more information about this problem that I don't understand
Any idea, please?
Thanks
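One way to surface more information than the bare 500 (a sketch, assuming the Azure CLI is installed and you are logged in; the names in angle brackets are placeholders): enable filesystem logging on the slot and stream its log. Note also that a slot created without cloned configuration starts with no app settings, which is a common cause of slot-only 500s.

```shell
# Turn on application and web-server logging for the slot,
# then stream the log while reproducing the 500.
# <app>, <rg>, and <slot> are placeholders for your own names.
az webapp log config --name <app> --resource-group <rg> --slot <slot> \
    --application-logging filesystem --web-server-logging filesystem

az webapp log tail --name <app> --resource-group <rg> --slot <slot>
```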
Create an API and publish it with two deployment slots, one with cloned configuration and one without.
Below is the response in local environment.
Below is the response from API after deployed in Azure
Below is the response in the local browser from the API published to Azure.
We need to send the "Ocp-Apim-Subscription-Key" header with the value "d*****e".
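To make the mechanism above concrete, here is a minimal sketch of passing the subscription key to an API Management endpoint. The URL is a placeholder, and buildApimRequest is a hypothetical helper; only the Ocp-Apim-Subscription-Key header name comes from the answer itself.

```javascript
// Sketch: calling an API Management endpoint with a subscription key.
// APIM rejects requests that lack this header (hence a 401, or a 500
// surfaced through a backend) unless the subscription requirement is off.

function buildApimRequest(url, subscriptionKey) {
  return {
    url,
    options: {
      method: 'GET',
      headers: { 'Ocp-Apim-Subscription-Key': subscriptionKey },
    },
  };
}

// Usage with fetch (Node 18+):
// const { url, options } = buildApimRequest(
//   'https://<your-apim>.azure-api.net/api/values', '<subscription key>');
// const res = await fetch(url, options);
```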
I have an ExpressJS app running in the App Engine standard environment. I have a workflow set up with two GCP projects such that pushes to the develop branch deploy to the staging environment (1st GCP project) and pushes to main deploy to the live environment (2nd GCP project).

During development we experienced no issues with App Engine in the staging project. However, the same code and service account permissions in the live project produce "GaxiosError: Could not refresh access token: Unsuccessful response status code" when attempting to upload to storage buckets, and "16 UNAUTHENTICATED: Failed to retrieve auth metadata with error: Could not refresh access token: Unsuccessful response status code. Request failed with status code 500" when attempting to access a secret from Google Secret Manager.

We have another backend service set up in the same way on the same projects, and it produces the same results (working in staging but not in the live environment). I made sure to match the permissions of the default App Engine service account, with no luck. Please help with this issue; it is becoming increasingly frustrating as I realize it may be out of my control.
I've been successfully deploying my node.js GAE web app for months using gcloud app deploy. It's been a month or so since my last deployment and I've made a few updates since then that I want to get out. So I did my usual
gcloud app deploy
and it uploaded the files and then failed, giving me this error:
(gcloud.app.deploy) Error Response: [5] failed to getGaiaID for "<SERVICE NAME WAS HERE>": generic::not_found: Account disabled: 810593457260
At first I thought it was a payment issue - but my payment info is up to date. The only major change to the code I recall is that between this deploy and the last deployment, I started versioning the project with git and pushing to github.
Does anyone have any ideas? In particular, is there any reason Git or GitHub .git would interfere with a gcloud deployment?
Thanks
So, oddly enough, my account ended up not being disabled. My last three attempts at deploying all occurred at unsecured network locations (airport, hotel, convention center). Apparently, Google gives a disabled-account warning when you try to deploy over an unsecured network.
Therefore the command gcloud app deploy will not succeed when run over an unsecured network.
Later, attempting to deploy over unsecured networks disabled the project in GAE. Since billing was in good standing, Google continued to serve my content (and bill for it), but changes to the service were disabled.
To enable them again, follow the instructions:
To enable a service account, at minimum the user must be granted the Service Account Admin role (roles/iam.serviceAccountAdmin) or the Editor basic role (roles/editor)
In the Cloud Console, go to the Service accounts page.
Select the project
Click the name of the service account that you want to enable
Under Service account status, click Enable service account, then click Enable to confirm the change.
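The console steps above can also be done from the CLI (a sketch; SA_EMAIL and <project-id> are placeholders, e.g. the default App Engine service account is <project-id>@appspot.gserviceaccount.com):

```shell
# Re-enable the service account, then confirm its disabled flag is false.
gcloud iam service-accounts enable SA_EMAIL --project <project-id>
gcloud iam service-accounts describe SA_EMAIL --format="value(disabled)"
```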
Other resources:
How to understand the service accounts
How to create and manage service accounts
I'm trying to deploy my Dialogflow agent with Genesys, but when I link it to a specific environment, the webhook via Cloud Function stops working. It only works when I link it with the Draft environment. Does anyone have the same issue? env configuration
I've set up continuous deployment between Bitbucket and the new Azure portal (preview). It works great, but when I checked Bitbucket, I noticed that it created a service rather than a webhook in the services section, and the following message is displayed:
In the future, you will not be able to create POST or Pull Request POST services from this screen, as Bitbucket's new and improved webhooks will replace these services. Existing POST services will continue to function as expected for now. To create a new webhook, refer to the documentation for Bitbucket's updated webhooks.
But I can't figure out how to create a webhook in the new Azure portal. Every article on the web that I have found explains it based on the old ('current') portal.
Any ideas on how I can create a webhook instead? It's not critical since it's working, but considering the message displayed in Bitbucket, I thought I'd look into it now rather than wait for Bitbucket to disable this feature.
Thanks.
Log in to the old Azure console, https://manage.windowsazure.com
Go to Configure
Find the DEPLOYMENT TRIGGER URL
You can use this trigger URL to set up a webhook in Bitbucket
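As a sketch of that last step, this is roughly the payload Bitbucket's 2.0 webhook API accepts, pointed at the deployment trigger URL. The trigger URL below is a placeholder, and buildBitbucketWebhook is a hypothetical helper; the payload fields (description, url, active, events) are the ones the Bitbucket Cloud API documents.

```javascript
// Sketch: webhook payload for Bitbucket's 2.0 API. POST this as JSON
// (with Basic auth) to:
// https://api.bitbucket.org/2.0/repositories/<workspace>/<repo>/hooks

function buildBitbucketWebhook(triggerUrl) {
  return {
    description: 'Trigger Azure deployment',
    url: triggerUrl,          // the DEPLOYMENT TRIGGER URL from the portal
    active: true,
    events: ['repo:push'],    // fire on every push
  };
}
```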
But I don't think you have to worry much, as Azure will eventually upgrade their services to do this automatically. That is what managed cloud means: we are paying them to manage this.