I'm new to Google Cloud and trying to understand the relationship between a Google Cloud endpoint and a back-end app on App Engine.
It looks like when I deploy my application (gcloud app deploy) I get a URL that looks something like https://my-service-dot-my-app.appspot.com/path/operation/etc. Is this URL going through the cloud endpoint, or right to the container?
When I call the service in this way I don't see any traffic to the cloud endpoint. In fact, when I try to access the service using what I think is the cloud endpoint, it just gives me a 404 (https://my-app.appspot.com/path/operation/etc). Why can't I access with the endpoint? Permissions?
My initial thought was that the endpoint was something separate that routes traffic to the back-end. However, when I do something like change the security configuration in openapi.yaml and just redeploy the endpoint definition (gcloud endpoints services deploy openapi.yaml), this does not seem to actually have any effect.
For example, the initial deployment had Firebase security. I removed it and redeployed the endpoint definition, but security remains on when calling the service. It seems I have to redeploy the back-end to disable security.
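For reference, the behavior described above matches a sketch like the following: redeploying only the Endpoints definition creates a new service config, but the proxy baked into the App Engine deployment keeps using the config it was deployed with, so the backend must be redeployed to pick up the change (the commands are the standard gcloud ones; openapi.yaml is the file from the question):

```shell
# Push the updated endpoint definition; this creates a new service config ID.
gcloud endpoints services deploy openapi.yaml

# Redeploy the backend so its proxy picks up the new config
# (otherwise it keeps enforcing the old security settings).
gcloud app deploy
```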
Any insight would be appreciated.
Cloud Endpoints is a security layer in front of your API. It acts as a proxy that performs security checks (based on API keys, OAuth, SAML, ...) and routes requests to the correct backend. The endpoint definition is based on OpenAPI 2 (not 3, be careful!). There are newer advanced features like rate limiting, and billing is coming soon.
Initially integrated into App Engine, this product has been open sourced and can be deployed on Cloud Run, Cloud Functions and GKE/Kubernetes. A similar paid and more powerful product is Apigee.
I wrote an article on using Endpoints deployed on Cloud Run, with API key security, which routes requests to Cloud Run, Cloud Functions and App Engine.
Cloud Endpoints also offers a developer portal that allows your customers, providers and developers to view your API specification and test it dynamically online.
I hope these elements give you a better overview of Cloud Endpoints for abstracting your underlying API deployment.
I believe we need to address a few points before providing the correct way forward:
For your first question:
Is this URL going through the cloud endpoint, or right to the container?
Deploying an application to App Engine will generate an appspot.com URL for the app. This URL is used to access the application directly, and it will remain available to the internet unless you enable Cloud IAP or set other restrictions on the service.
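As a minimal sketch, enabling Cloud IAP for an App Engine app can be done from the CLI (this assumes the project's OAuth consent screen has already been configured for IAP):

```shell
# Turn on Identity-Aware Proxy for the whole App Engine app,
# blocking direct unauthenticated access to the appspot.com URL.
gcloud iap web enable --resource-type=app-engine
```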
For your second question:
Why can't I access with the endpoint?
If you are referring to https://my-app.appspot.com/path/operation/etc, there can be a lot of reasons for it not to work; it will depend on which step of the setup process you are at.
Normally, to set up Cloud Endpoints with OpenAPI in front of an App Engine backend, you need to limit access to the appspot.com URL, but also deploy an Extensible Service Proxy (ESP) to Cloud Run, which you will access the app through later.
Conclusion:
Now, for actually achieving this setup, I suggest you follow the Getting Started with Endpoints for App Engine standard environment.
As per the guide, the following is the full task list required to set up Endpoints for an App Engine Standard backend, using Cloud Endpoints:
1 - Configure IAP to secure your app.
2 - Deploy the ESP container to Cloud Run.
3 - Create an OpenAPI document that describes your API, and configure the routes to your App Engine app.
4 - Deploy the OpenAPI document to create a managed service.
5 - Configure ESP so it can find the configuration for your service.
Keep in mind that once you set up the ESP configuration, any calls will need to go through https://[YOUR-GATEWAY-NAME].a.run.app.
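A minimal sketch of steps 2 and 4, assuming hypothetical names (my-gateway, my-project); the ESPv2 serverless image is the one the official guide uses:

```shell
# Step 2: deploy the Extensible Service Proxy (ESPv2) to Cloud Run.
gcloud run deploy my-gateway \
  --image="gcr.io/endpoints-release/endpoints-runtime-serverless:2" \
  --allow-unauthenticated \
  --platform=managed \
  --project=my-project

# Step 4: deploy the OpenAPI document to create the managed service.
gcloud endpoints services deploy openapi.yaml --project=my-project
```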
If you happen to be stuck in any particular step, please provide what you have done so far.
I hope this helps.
Is this URL going through the cloud endpoint, or right to the container?
App Engine apps are container-based deployments on Google's infrastructure. The URL is created when you deploy the app; please note it is the app's URL, not an API endpoint.
When I call the service in this way I don't see any traffic to the cloud endpoint
I don't think a Cloud Endpoint is created by default.
One way to check whether a Cloud Endpoint was created is to check whether its API is enabled in your project, or whether a service account was created on the IAM page.
To configure Cloud Endpoints for App Engine, follow this procedure.
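For example, a quick way to run that check from the CLI (a sketch; the grep pattern just looks for the Endpoints and Service Management APIs among the enabled services):

```shell
# List enabled APIs and look for Endpoints-related services.
gcloud services list --enabled | grep -i -e endpoints -e servicemanagement
```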
Related
I'm facing an issue with my multi-project solution in .NET Core Web API. I have a gateway API which internally makes calls to different microservices via HTTP.
The gateway API URI is exposed to the outside world with the Azure app name as its domain, but the internal calls from the gateway to the microservices are configured as http://localhost:5001/{apiEndPoint}, which works fine on my local machine. After deploying it to Azure App Service, I'm getting the error below:
PostToServer call URL:'http://localhost:5001/api/authservice/authenticate' with Exception message An attempt was made to access a socket in a way forbidden by its access permissions. (localhost:5001).
Can someone please help me with this? I'm new to Azure and learning on my own, but could not find any solution for this yet.
PS: After going through some YouTube videos and blogs I got the impression we have to use AKS, but I'm not confident in that.
Would really appreciate any help on this issue.
The gateway API you deployed to Azure App Service doesn't support using a custom port such as 5001. Azure App Service only supports ports 80 and 443 (HTTP/HTTPS).
If you must use multiple ports in your actual project, it is recommended to check whether Azure Cloud Services meets your needs, but it is not the best choice.
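In practice this usually means pointing the gateway at each microservice's public HTTPS URL instead of localhost. A hypothetical appsettings fragment (the host name is a placeholder, not your real service):

```json
{
  "ServiceUrls": {
    "AuthService": "https://my-authservice.azurewebsites.net/api/authservice"
  }
}
```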
The Best Practice:
Microservices architecture design
In short, create an Azure gateway service; your other microservices can then be deployed anywhere (Azure App Service, VMs or AKS).
You just need to make sure you can access your microservices from your internal or public network environment.
If you're just learning, or the app isn't actually used by a lot of users, you can try the following suggestions:
Use SignalR (not Azure SignalR) to replace the WebSocket in your current project.
Since you have an Azure App Service, you can deploy your gateway API application to the App Service, and your other microservices can be deployed as Virtual Applications within it.
I have the frontend and backend on Cloud Run, each with its own service, but when I set "internal traffic" on the backend API it doesn't work: the frontend gets a 403, even though it is another service in the same project, and the documentation says that internal means "only for the same project", so...
Are two services from the same project not internal traffic?
I think that is because I use a custom domain and not the exact URL of the service, but I am not sure, because the documentation says that custom domains are allowed too.
So what do I have to do to authenticate my frontend service on Cloud Run?
I tried JWT auth, but is there a better option?
Cloud Run services set to internal only accept traffic coming from the VPC network. In order to connect to a Cloud Run service that's serving internal traffic, the connecting service must be attached to a VPC connector. In this case, you need to set up a Serverless VPC Access connector as mentioned in this note:
For requests from other Cloud Run services or from Cloud Functions in the same project, connect the service or function to a VPC network and route all egress through the connector, as described in Connecting to a VPC network. Note that the IAM invoker permission is still enforced.
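A command-line sketch of that setup, with hypothetical names (my-connector, frontend) and region: create the connector, then attach the calling service to it and route its egress through it:

```shell
# Create a Serverless VPC Access connector in the project's network.
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 \
  --network=default \
  --range=10.8.0.0/28

# Attach the frontend service to the connector and send all egress through it.
gcloud run services update frontend \
  --region=us-central1 \
  --vpc-connector=my-connector \
  --vpc-egress=all-traffic
```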
For authenticating service-to-service, you can simply fetch an ID token from the Compute metadata server. You can do that on any GCP compute environment (Cloud Run, App Engine, Compute Engine, etc.). You can follow the steps provided in this documentation.
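From inside the calling service, fetching that ID token is a single metadata-server request (the audience URL below is a placeholder for the backend's run.app URL):

```shell
# Ask the metadata server for an ID token whose audience is the backend service.
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://backend-abc123-uc.a.run.app"
# Send the result in the Authorization header of the backend call:
#   Authorization: Bearer <token>
```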
Are two services from the same project not internal traffic?
Two services in the same project should be considered as internal traffic.
I believe what you need to do is follow the authentication steps with token as recommended here (service to service authentication):
https://cloud.google.com/run/docs/authenticating/service-to-service
https://cloud.google.com/run/docs/securing/service-identity#per-service-identity
Please note that even though you've set the ingress traffic to internal, the IAM role Cloud Run Invoker is still needed for the calling service account.
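Granting that role is one command; a sketch with placeholder service, region and service-account names:

```shell
# Allow the frontend's service account to invoke the backend service.
gcloud run services add-iam-policy-binding backend \
  --region=us-central1 \
  --member="serviceAccount:frontend-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/run.invoker"
```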
I have an API that runs on an Azure app service which is exposed through Azure API Management. Is there a way to tell if any requests are hitting the app service URL directly without going through the API Management service?
In my opinion, APIM can't record requests that hit the App Service URL directly, because those requests have no relationship with APIM. If you want to record these requests, you need to modify the API in your code.
For example, you can add a parameter with a specific value in API Management, and when your code receives this parameter, you can check the value to know whether the call came from APIM.
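For instance, an inbound policy in API Management can stamp every proxied request with a header the backend can check (the header name here is made up for illustration):

```xml
<inbound>
    <base />
    <!-- Added to every request that passes through APIM. -->
    <set-header name="X-Via-APIM" exists-action="override">
        <value>true</value>
    </set-header>
</inbound>
```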
I assume that you want to prevent calling the App Service URL directly, so I suggest adding a whitelist on your server so that only APIM requests can reach it.
For adding access restrictions, if you're using Azure App Service, you can learn about this in the MS documentation.
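A sketch of such a restriction with the Azure CLI, assuming placeholder resource names and that 203.0.113.10 stands in for your APIM instance's public IP:

```shell
# Allow only the APIM public IP to reach the App Service directly.
az webapp config access-restriction add \
  --resource-group my-rg \
  --name my-app \
  --rule-name AllowAPIM \
  --action Allow \
  --ip-address 203.0.113.10/32 \
  --priority 100
```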
I believe if you enable Application Insights on both API Management and the App Service, you can view the requests in Application Insights for the App Service and tell which ones were direct calls and which ones came through API Management.
I have some doubts about the most appropriate way to allow access to my company's backend services from public clouds like AWS or Azure, and vice versa. In our case, we need an AWS app to invoke some HTTP REST services exposed in our backend.
I came out with at least two options:
The first one is to set up an AWS Virtual Private Cloud between the app and our backend and route all traffic through it.
The second option is to expose the HTTP service through a reverse proxy and set up IP filtering in the proxy to allow only incoming connections from AWS. We don't want the HTTP service to be publicly accessible from the Internet, and I think this is satisfied whichever option we choose. We will also likely need to integrate more services (TCP/UDP) between AWS and our backend, like FTP transfers, monitoring, etc.
My main goal is to setup a standard way to accomplish this integration, so we don't need to use different configurations depending on the kind of service or application.
I think this is a very common need in hybrid cloud scenarios so I would just like to embrace the best practices.
I would very much appreciate any kind of advice.
Your option #2 seems good. Since you have an AWS VPC, you can get a fixed egress IP to whitelist in your reverse proxy.
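On the proxy side, the filtering itself is a few lines; a hypothetical nginx sketch (the CIDR and upstream name are placeholders for your AWS egress range and backend):

```nginx
location /api/ {
    # Only accept connections originating from the AWS side.
    allow 203.0.113.0/24;
    deny  all;
    proxy_pass http://backend_upstream;
}
```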
There is another approach: expose your backends as APIs secured with OAuth tokens. You need some sort of API management solution for this. Then your Node.js app can invoke those APIs with the token.
WSO2 API Cloud allows you to create these APIs in the cloud and run the API gateway in your own datacenter. The Node.js API calls will then hit the on-premises gateway, which will validate the token and let the request through to the backend. You will not need to expose the backend service to the internet. See this blog post:
https://wso2.com/blogs/cloud/going-hybrid-on-premises-api-gateways/
I have created a CDN endpoint at [id].vo.msecnd.net, and I have deployed a production mvc4 cloud service web role.
It has images in a root-level /cdn folder, but I cannot get those images to load via cdn. I can access them via direct URL. For example, this works:
[site].cloudapp.net/cdn/eb303.gif
but not:
[id].vo.msecnd.net/eb303.gif
The CDN endpoint is enabled and set up under the hosted service that the web role is in. The documentation I have been reading indicates that nothing more is required. I am using a BizSpark license, but as far as I can tell that should include CDN endpoints.
Is there a step I am missing?
Thanks!
Sometimes it may take up to 1 hour before your CDN endpoint is ready to serve your content. It does work with either kind of deployment, staging or production.
Do you still have the problem? If so, you may need to contact Azure Support, because if you have enabled the CDN you should be good to go.