I have an Azure AKS cluster running in the Azure cloud. It is accessed by frontend and mobile apps via Azure API Management. My frontend app runs outside of AKS.
Is it possible to use Azure Dev Spaces in this setup to test my changes in an isolated environment?
I've created a new namespace in AKS and a separate deployment slot for the testing environment on the frontend app, but I can't figure out how to create isolated routing in Azure API Management.
In the end, I'd like an isolated environment that shares most of the containers on AKS but uses my local machine to host the one service that is currently under test.
I assume you intend to use Dev Spaces routing through a space.s. prefix on your domain name. For this to work, you ultimately need a Host header that includes such a prefix as part of the request to the Dev Spaces ingress controller running in your AKS cluster.
It sounds like in your case you are running your frontend as an Azure Web App and your backend services in AKS. Your frontend would therefore need to include the logic to do one of two things:
Allow the slot instance to customize the space name to use (e.g. it might call the AKS backend services using something like testing.s.default.myservice.azds.io)
Read the Host header from the frontend request and propagate it to the backend request.
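To make the first option a bit more concrete, here is a minimal sketch (not an official Dev Spaces client; the DEV_SPACE variable and host names are hypothetical placeholders following the testing.s.default.myservice.azds.io pattern above) of frontend code that picks the space prefix from slot configuration:

```python
# Minimal sketch: choose the Dev Spaces space prefix per deployment slot.
# Assumes the Python requests library; the domain below is a hypothetical
# placeholder following the testing.s.default.myservice.azds.io pattern.
import os

import requests

# e.g. set DEV_SPACE="testing.s." on the testing slot, leave unset in production
SPACE_PREFIX = os.environ.get("DEV_SPACE", "")
BACKEND_HOST = f"{SPACE_PREFIX}default.myservice.azds.io"

def call_backend(path: str) -> requests.Response:
    # The host name (and thus the Host header) carries the space prefix,
    # which the Dev Spaces ingress controller uses to route the request.
    return requests.get(f"https://{BACKEND_HOST}/{path}")
```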
In either case, you will probably need to configure Azure API Management to correctly propagate appropriate requests to the testing slot you have created. I don't know enough about how API Management configures routing rules to help on this part, but hopefully I've been able to shed some light on the Dev Spaces part.
I successfully created an Azure Container App named my-app-name to host a Python Flask app.
The hostname for my app is the default FQDN, so it looks like my-app-name.grayocean-1r2fd430h.centralus.azurecontainerapps.io.
I would like the hostname to be more user-friendly, for example my-app-name.azurecontainerapps.io (similar to App Service, where the default is my-app-name.azurewebsites.net).
What should I do to make my custom hostname point to the Container App?
Unlike App Service, Container Apps has the concept of an environment that groups all the apps that might need to communicate with each other, whether for microservices or other designs you might have. The grayocean-1r2fd430h.centralus part is what is unique to your particular environment in this case.
From Container Apps, you have two options to give your app a custom domain:
1- You can use a different suffix for all the apps in your environment, i.e. replacing .grayocean-1r2fd430h.centralus.azurecontainerapps.io with a domain you own, like .cornisto.io for example. See https://learn.microsoft.com/en-us/azure/container-apps/environment-custom-dns-suffix for how to configure that.
2- You can assign custom domains per container app, where each application has its own set of custom domains that route to it. See this doc for how to set that up: https://learn.microsoft.com/en-us/azure/container-apps/custom-domains-certificates
You could also use a service like Azure Front Door or API Management to proxy traffic to your application; the custom domain would then be configured on that end instead.
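If you go with the per-app custom domain option, the setup involves pointing a DNS record at the app's generated FQDN. As a rough sanity check (a sketch, not an official tool; the domain names and the dnspython dependency are my assumptions), you could verify the CNAME before binding the domain:

```python
# Minimal sketch: verify a custom domain's CNAME target before adding it
# to a Container App. Assumes the dnspython package (pip install dnspython).
# Both domain names below are hypothetical placeholders.
import dns.resolver

CUSTOM_DOMAIN = "my-app-name.cornisto.io"  # hypothetical custom domain
EXPECTED_TARGET = "my-app-name.grayocean-1r2fd430h.centralus.azurecontainerapps.io"

answers = dns.resolver.resolve(CUSTOM_DOMAIN, "CNAME")
targets = [str(rdata.target).rstrip(".") for rdata in answers]

if EXPECTED_TARGET in targets:
    print("CNAME is in place; the custom domain can be bound to the app.")
else:
    print(f"CNAME mismatch: found {targets}")
```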
I'm new to Google Cloud and trying to understand the relationship between a Google Cloud endpoint and a back-end app on App Engine.
It looks like when I deploy my application (gcloud app deploy) I get a URL that looks something like https://my-service-dot-my-app.appspot.com/path/operation/etc. Is this URL going through the cloud endpoint, or right to the container?
When I call the service in this way I don't see any traffic to the cloud endpoint. In fact, when I try to access the service using what I think is the cloud endpoint, it just gives me a 404 (https://my-app.appspot.com/path/operation/etc). Why can't I access it via the endpoint? Permissions?
My initial thought was that the endpoint was something separate that routes traffic to the back-end. However, when I do something like change the security configuration in openapi.yaml and just redeploy the endpoint definition (gcloud endpoints services deploy openapi.yaml), this does not seem to actually have any effect.
For example, the initial deployment had Firebase security. I removed it and redeployed the endpoint definition, but security remains on when calling the service. It seems I have to redeploy the back end to disable security.
Any insight would be appreciated.
Cloud Endpoints is a security layer in front of your API. It acts as a proxy, performs security checks (based on API key, OAuth, SAML, ...), and routes requests to the correct endpoint. The endpoint definition is based on OpenAPI 2 (not 3, be careful!). There are newer advanced features like rate limiting, and soon billing.
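For example, when an API is protected with an API key, Endpoints expects the key as a query parameter. A minimal sketch with the Python requests library (the URL and the key value are hypothetical placeholders):

```python
# Minimal sketch: call an Endpoints-protected API with an API key.
# Endpoints checks the "key" query parameter before routing the request
# to the backend. The URL and key below are hypothetical placeholders.
import requests

API_URL = "https://my-gateway-abc123.a.run.app/path/operation"  # placeholder
API_KEY = "YOUR_API_KEY"

response = requests.get(API_URL, params={"key": API_KEY})
response.raise_for_status()
print(response.json())
```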
Initially integrated into App Engine, this product has been open sourced and can be deployed on Cloud Run, Cloud Functions, and GKE/Kubernetes. A similar, paid, and more powerful product is Apigee.
I wrote an article about using Endpoints deployed on Cloud Run, with API key security, which routes requests to Cloud Run, Cloud Functions, and App Engine.
Cloud Endpoints also offers a developer portal that allows your customers, providers, and developers to view your API specification and test it dynamically online.
I hope these elements give you a better overview of how Cloud Endpoints abstracts your underlying API deployment.
I believe we need to address a few points before providing the correct way forward:
For your first question:
Is this URL going through the cloud endpoint, or right to the container?
Deploying an application to App Engine will generate an appspot.com URL for the app. This URL is used to access the application directly, and it will remain available on the internet unless you enable Cloud IAP or set other restrictions on the service.
For your second question:
Why can't I access it via the endpoint?
If you are referring to https://my-app.appspot.com/path/operation/etc, there can be many reasons for it not to work; it depends on which step of the setup process you are at.
Normally, to set up Cloud Endpoints with OpenAPI in front of an App Engine backend, you need to limit access to the appspot.com URL, and also deploy an Extensible Service Proxy (ESP) to Cloud Run, through which the API is accessed later.
Conclusion:
Now, for actually achieving this setup, I suggest you follow the Getting Started with Endpoints for App Engine standard environment guide.
As per the guide, the following is the full task list required to set up Cloud Endpoints for an App Engine standard backend:
1 - Configure IAP to secure your app.
2 - Deploy the ESP container to Cloud Run.
3 - Create an OpenAPI document that describes your API, and configure the routes to your App Engine.
4 - Deploy the OpenAPI document to create a managed service.
5 - Configure ESP so it can find the configuration for your service.
Keep in mind that once you set up the ESP configuration, any calls will need to go through [YOUR-GATEWAY-NAME].a.run.app.
If you happen to be stuck in any particular step, please provide what you have done so far.
I hope this helps.
Is this URL going through the cloud endpoint, or right to the container?
App Engine apps are container-based deployments on Google's infrastructure. The URL is created when you deploy the app; note that it is not an API by itself.
When I call the service in this way I don't see any traffic to the cloud endpoint
I don't think a Cloud Endpoint is created by default.
One way to check whether a Cloud Endpoint has been created is to check whether its API is enabled in your project, or whether a service account for it has been created on the IAM page.
To configure a Cloud Endpoint for App Engine, follow this procedure.
I have a web application that is currently running on IIS on 3 Azure VMs. I have been working to make my application App Service friendly, but I would like to test the migration to App Service in a safe, controlled environment.
Would it be possible to spin up the App Service and use an Azure Load Balancer to redirect a percentage of traffic off the VMs and onto the App Service?
Is there any other technology that would help me get there?
You might be able to achieve this if you are using an App Service Environment and an internal load balancer:
https://learn.microsoft.com/en-us/azure/app-service/environment/app-service-environment-with-internal-load-balancer
However, based on your description of your current setup, I don't believe there is an ideal solution, as a standard load balancer only allows backend ports to map to VMs. Using an Application Gateway might be another option as well:
https://learn.microsoft.com/en-us/azure/application-gateway/
I would suggest you make use of the deployment and production slots that come with a Web App. Once you have the web app running in the dev slot, test the site to ensure everything works as expected. Once it does, swap it into the production slot and reroute all traffic from the VMs to the App Service.
All in all, running an app on a Web App is quite simple. Microsoft takes away the need to manage the VM settings, so you can simply deploy and run. I don't see you having any issues migrating; the likelihood of problems is small. You can also minimize risk by performing the migration during off hours in case you need to make any changes.
There is also some Web App migration guidance you might find useful:
https://learn.microsoft.com/en-us/dotnet/azure/dotnet-howto-choose-migration?view=azure-dotnet
We have an application running in Azure that consists of the following:
A Web App front end, which talks to…
A WebApi running as a Web App as well, which can (as well as a couple other services) talk to…
A Cloud Service load-balanced set of VMs which are hosting an Elasticsearch cluster.
Additionally, we have the scenario where devs whitelist their IPs so that their localhost versions of the API can hit the VMs as well.
We have locked down our Elasticsearch VMs by adding ACLs to the exposed endpoint. I whitelisted the outbound IPs that were listed on my App Services. I was under the mistaken impression that these were unique to my API. It turns out that they are shared across the scale unit in Azure: other services running in the same scale unit could, if they knew the endpoint, access the data exposed in my cluster. I need to lock this down, and I am trying to find the easiest way. These are the things I am looking at, and I would appreciate advice and/or redirection.
Elastic Shield: Not being considered. This is a product by Elastic that is designed to secure ES. This is ideal, but at the moment it is out of scope (due to the cost and overhead).
Elastic plugins: Not being considered. The main plugins (such as Jetty) appear to be abandoned.
Azure VPN: I originally tried to set this up, but ran into too many difficulties. The ACLs seemed to give me what I need without much difficulty. I am not sure if I can set this up now. The things I don't know are: I don't think I can move existing VMs into a new VPN; I think you have to recreate the VMs in that VPN from the get-go. Could I move my Web App into the VPN? How does that work? This would probably break my developer scenario, as the localhost API would not be able to access the VPN, right?
Add a certificate to requests: It would be ideal if I could have requests require a cert or a header token. I assume that to do this I would need to create a proxy that runs on the VMs and does the validation before forwarding the request on to my Elasticsearch (a rough sketch of this idea follows below).
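To illustrate that last option, here is a minimal, hypothetical sketch of such a token-checking proxy (Python with Flask and requests; the token value and the assumption that Elasticsearch listens only on localhost are placeholders), not a hardened implementation:

```python
# Minimal sketch of a token-validating proxy in front of Elasticsearch.
# Assumes Flask and requests are installed; the shared token and the
# Elasticsearch address are hypothetical placeholders.
import requests
from flask import Flask, Response, request

app = Flask(__name__)

SHARED_TOKEN = "replace-with-a-long-random-secret"  # placeholder
ELASTICSEARCH = "http://localhost:9200"             # ES bound to localhost only

@app.route("/", defaults={"path": ""}, methods=["GET", "POST", "PUT", "DELETE", "HEAD"])
@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE", "HEAD"])
def proxy(path):
    # Reject requests that don't carry the expected header token.
    if request.headers.get("X-Auth-Token") != SHARED_TOKEN:
        return Response("Forbidden", status=403)

    # Forward the request to the local Elasticsearch instance.
    upstream = requests.request(
        method=request.method,
        url=f"{ELASTICSEARCH}/{path}",
        params=request.args,
        data=request.get_data(),
        headers={"Content-Type": request.headers.get("Content-Type", "application/json")},
    )
    return Response(upstream.content, status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=9201)
```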
Anything else? Is there another option I have not thought of?
Thanks!
~john
You can create a point-to-site VPN connecting your Web App with your IaaS VMs. This is the best solution because you will be able to use only internal IPs on your IaaS VMs.
The easiest way to do that in the Azure Portal is to create a Web App and then create a new VPN and VNet using the "Setup" option at Your Web App -> Settings -> Networking -> VNET Integration -> Setup -> Create New Virtual Network.
After that, create your IaaS VMs inside this new VNet.
You can also create an ARM template to create the Web App, IaaS VMs, VPN, and everything else that you need. Take a look at my ARM template that creates PHP + MySQL using a Web App and a MariaDB cluster connected by VPN: https://github.com/juliosene/azure-webapp-php-mariadb
I'd like to implement a microservice architecture on CF (run.pivotal.io) and am having problems creating my private backend services.
As I see it, I have two options at deployment: with and without a route.
With a route my services become public, which is OK for my public site and my public REST API, but I don't want it for my backend services.
Without a route I don't see how I should do service discovery.
What I found already:
Use the VCAP_APPLICATION env variable and create my own service discovery (or use something like Eureka) based on that. Does this always give me a valid IP:PORT? No matter which DEA my app is running on, is it reachable at this IP:PORT by apps on other DEAs?
Register my backend app as a service and bind it, then use VCAP_SERVICES. I'd like to do this, but I have only found documentation about registering services outside CF. Is there a simple way to bind my own app as a service?
What would be really nice is to be able to mark an app as private but still assign a host and domain to it, so that (only) my other apps could call it through the CF load balancers while it stays protected from the public.
Answers inline...
As I see it, I have two options at deployment: with and without a route.
This depends on the Cloud Foundry installation and how it's configured. On PWS, you cannot talk directly between application instances. It's a security restriction. You have to go through the router.
With a route my services become public, which is OK for my public site and my public REST API, but I don't want it for my backend services.
The best you can do here is to add application level (or container level, if you prefer) security to prevent unauthorized access.
If you don't want to do password-based authentication, you could do IP-based filtering. On PWS, we just added a service with Statica. You can use that to send your outbound traffic through a proxy that assigns a static IP to that traffic. You could then restrict access to your app to only the Statica IPs.
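As a rough illustration of that kind of filtering at the application level, here is a minimal Python/Flask sketch (the allowlisted addresses are hypothetical placeholders; behind the CF router the original client address typically arrives in the X-Forwarded-For header):

```python
# Minimal sketch: reject requests that don't come from allowlisted IPs.
# The addresses below are hypothetical placeholders for your proxy's
# static egress IPs.
from flask import Flask, abort, request

app = Flask(__name__)

ALLOWED_IPS = {"203.0.113.10", "203.0.113.11"}  # placeholder static IPs

@app.before_request
def restrict_by_ip():
    # Behind the CF router, the original client IP is usually the first
    # entry in X-Forwarded-For; fall back to the socket address.
    forwarded = request.headers.get("X-Forwarded-For", "")
    client_ip = forwarded.split(",")[0].strip() or request.remote_addr
    if client_ip not in ALLOWED_IPS:
        abort(403)

@app.route("/")
def index():
    return "hello from a backend service"
```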
Without a route I don't see how I should do service discovery.
If you remove the route, you can't send traffic to the app.
Use the VCAP_APPLICATION env variable and create my own service discovery (or use something like Eureka) based on that. Does this always give me a valid IP:PORT? No matter which DEA my app is running on, is it reachable at this IP:PORT by apps on other DEAs?
You'd probably need to use this enhancement; it was added to support this type of deployment. However, this will only work on Cloud Foundry installations where the networking restrictions between application instances have been relaxed. Normally you cannot talk directly between instances.
Register my backend app as a service and bind it, then use VCAP_SERVICES. I'd like to do this, but I have only found documentation about registering services outside CF. Is there a simple way to bind my own app as a service?
You can create a "user provided" service. Look at the cf cups command. It lets you create a service with an arbitrary set of parameters and data. This could contain the URLs for your services. Once you create the service, you can bind it to any number of apps.