Technical Stack
Imperva WAF
Angular 7
Azure WebApp
We are planning to deploy an Angular 7 build to an Azure WebApp and add a custom domain to it, which will sit behind the Imperva WAF. The WAF will make sure that only whitelisted IPs can access the site. As a result, the WebApp sees the WAF's IPs instead of the client IPs.
Because of this, we are not able to produce usage reports for given client IPs. So, as suggested, we want to add this data to App Insights to make sure we have everything needed for the usage reports.
How can we implement this? Do we need to write custom code to implement this?
Please correct me if I misunderstand you.
There is an Application Insights SDK you can make use of; its trackTrace / trackEvent methods let you add your own custom log entries, as in the sketch below.
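As a rough sketch only (it assumes the @microsoft/applicationinsights-web package, a placeholder instrumentation key, and that you obtain the original client IP yourself, e.g. from a backend endpoint that reads the WAF's forwarded-IP header):

```typescript
// Hypothetical Angular service wrapping the Application Insights JS SDK.
// The instrumentation key and property names are placeholders.
import { Injectable } from '@angular/core';
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

@Injectable({ providedIn: 'root' })
export class TelemetryService {
  private appInsights = new ApplicationInsights({
    config: { instrumentationKey: '<YOUR-INSTRUMENTATION-KEY>' }
  });

  constructor() {
    this.appInsights.loadAppInsights();
    this.appInsights.trackPageView(); // baseline usage telemetry
  }

  // Attach the original client IP (obtained elsewhere) as a custom property
  // so it appears in the usage reports alongside each event/trace.
  logUsage(action: string, clientIp: string): void {
    this.appInsights.trackEvent({ name: 'UserAction' }, { action, clientIp });
    this.appInsights.trackTrace({ message: `User action: ${action}` }, { clientIp });
  }
}
```

You would then inject this service wherever you want to record usage and query the custom properties in your reports.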
You can also search online; there are plenty of code examples of using Application Insights for logging.
Hope it helps.
I'm new to Google Cloud and trying to understand the relationship between a Google Cloud endpoint and a back-end app on App Engine.
It looks like when I deploy my application (gcloud app deploy) I get a URL that looks something like https://my-service-dot-my-app.appspot.com/path/operation/etc. Is this URL going through the cloud endpoint, or right to the container?
When I call the service in this way I don't see any traffic to the cloud endpoint. In fact, when I try to access the service using what I think is the cloud endpoint, it just gives me a 404 (https://my-app.appspot.com/path/operation/etc). Why can't I access it with the endpoint? Permissions?
My initial thought was that the endpoint was something separate that routes traffic to the back-end. However, when I do something like change the security configuration in openapi.yaml and just redeploy the endpoint definition (gcloud endpoints services deploy openapi.yaml), this does not seem to actually have any effect.
For example, the initial deployment had Firebase security. I removed it and redeployed the endpoint definition, but security remained on when calling the service. It seems I have to redeploy the back-end to disable security.
Any insight would be appreciated.
Cloud Endpoints is a security layer in front of your API. It acts as a proxy that performs security checks (based on API key, OAuth, SAML, ...) and routes requests to the correct backend. The endpoint definition is based on OpenAPI 2 (not 3, be careful!). There are newer advanced features like rate limiting, and billing is coming soon.
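For illustration only, a minimal openapi.yaml sketch in OpenAPI 2 (Swagger 2.0) format with API key security could look like this; the host, title, path and operation names are placeholders, not taken from the question:

```yaml
# Hypothetical endpoint definition; all names and hosts are placeholders.
swagger: "2.0"
info:
  title: my-api
  version: "1.0.0"
host: "my-gateway-abc123.a.run.app"
schemes:
  - https
# API key security applied to every operation.
securityDefinitions:
  api_key:
    type: apiKey
    name: key
    in: query
security:
  - api_key: []
paths:
  /path/operation:
    get:
      operationId: getOperation
      responses:
        "200":
          description: OK
```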
Initially integrated with App Engine, this product has been open sourced and can be deployed on Cloud Run, Cloud Functions and on GKE/Kubernetes. A similar paid and more powerful product is Apigee.
I wrote an article about using Endpoints deployed on Cloud Run, with API key security, which routes requests to Cloud Run, Cloud Functions and App Engine.
Cloud Endpoints also offers a developer portal that allows your customers, providers and developers to view your API specification and to test it dynamically online.
I hope these elements give you a better overview of how Cloud Endpoints abstracts your underlying API deployment.
I believe we need to address a few points before providing the correct way forward:
For your first question:
Is this URL going through the cloud endpoint, or right to the container?
Deploying an application to App Engine will generate an appspot.com URL for the app. This URL is used to access the application directly, and it will remain available to the internet unless you enable Cloud IAP or set other restrictions on the service.
For your second question:
Why can't I access it with the endpoint?
If you are referring to https://my-app.appspot.com/path/operation/etc, there can be many reasons for it not to work; it will depend on which step of the setup process you are at.
Normally, to set up Cloud Endpoints with OpenAPI in front of an App Engine backend, you need to limit access to the appspot.com URL, and also deploy an Extensible Service Proxy (ESP) to Cloud Run, which is what you will call afterwards.
Conclusion:
Now, for actually achieving this setup, I suggest you follow the Getting Started with Endpoints for App Engine standard environment.
As per the guide, the following is the full task list required to set up Cloud Endpoints for an App Engine standard backend:
1 - Configure IAP to secure your app.
2 - Deploy the ESP container to Cloud Run.
3 - Create an OpenAPI document that describes your API, and configure the routes to your App Engine app.
4 - Deploy the OpenAPI document to create a managed service.
5 - Configure ESP so it can find the configuration for your service.
Keep in mind that once you set up the ESP configuration, any calls will need to go through the [YOUR-GATEWAY-NAME].a.run.app URL.
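As a rough illustration only (the gateway host, path and API key are placeholders), a client call through the gateway could pass the API key via the key query parameter:

```typescript
// Hypothetical client call through the ESP gateway on Cloud Run.
// Gateway host, path and API key are placeholders.
async function callThroughGateway(): Promise<void> {
  const gateway = 'https://my-gateway-abc123.a.run.app';
  const apiKey = '<YOUR-API-KEY>';

  // The API key goes in the "key" query parameter, one of the ways
  // Cloud Endpoints accepts API keys.
  const response = await fetch(`${gateway}/path/operation?key=${encodeURIComponent(apiKey)}`);
  if (!response.ok) {
    throw new Error(`Gateway returned ${response.status}`);
  }
  console.log(await response.json());
}

callThroughGateway().catch(console.error);
```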
If you happen to be stuck in any particular step, please provide what you have done so far.
I hope this helps.
Is this URL going through the cloud endpoint, or right to the container?
App Engine apps are container-based deployments on Google's infrastructure. The URL is created when you deploy the app; please note that this is the app's URL, not a Cloud Endpoints API.
When I call the service in this way I don't see any traffic to the cloud endpoint
I don't think a Cloud Endpoint is created by default.
One way to check whether a Cloud Endpoint has been created is to check whether its API is enabled in your project or whether a service account for it has been created on the IAM page.
To configure a Cloud Endpoint for App Engine, follow this procedure.
We are currently using Application Insights in our self-hosted web app, and we are trying to migrate the app to a new VM hosted in Azure. In this case, is there anything I need to do to make Application Insights continue to work? Do I need to whitelist the new VMs?
There is no way to configure Application Insights to only accept data from certain IPs. So you don't need to whitelist anything in Application Insights.
If anything, you might want to keep track of what outgoing traffic your VM allows. In that case, you need to whitelist the Application Insights endpoints in the outgoing direction.
I have an application registered both in Application Insights and Azure Active Directory, so I can send requests such as
https://management.azure.com/subscriptions/<subId>/resourceGroups/<resGroupId>/providers/Microsoft.Insights/components/myApp/providers/microsoft.insights/metrics?api-version=2018-01-01&metricnames=traces/count&interval=PT1H
to retrieve some Application Insights metrics. But I also need to know the identifier (IP or hostname) of the machine where my application is deployed. Application Insights Analytics queries provide such functionality (there is a cloud_RoleInstance column in the schema corresponding to the hostname of the application's machine).
But I have to use the classic Azure REST API (with an access_token and without an Application Insights access key). Can I do that? If I cannot, could you please provide links confirming that Microsoft prohibits such requests?
Thank you in advance.
Yes, you can do that. You can call all of these REST APIs using the Azure API format as well.
Refer to the below link for more information:
https://dev.applicationinsights.io/documentation/Overview/URL-formats
You can use the API below to get cloud_RoleInstance:
https://management.azure.com/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/microsoft.insights/components/{component_name}/query?api-version=2018-04-20&query=requests | project cloud_RoleInstance
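As a rough sketch of calling that URL with an Azure AD bearer token (subscription, resource group, component name and the token itself are placeholders; token acquisition is not shown):

```typescript
// Hypothetical call to the management.azure.com query endpoint shown above.
// All identifiers are placeholders; the KQL query is URL-encoded.
async function getRoleInstances(accessToken: string): Promise<void> {
  const subscriptionId = '<subId>';
  const resourceGroup = '<resGroupId>';
  const componentName = 'myApp';
  const query = encodeURIComponent('requests | project cloud_RoleInstance');

  const url =
    `https://management.azure.com/subscriptions/${subscriptionId}` +
    `/resourceGroups/${resourceGroup}/providers/microsoft.insights` +
    `/components/${componentName}/query` +
    `?api-version=2018-04-20&query=${query}`;

  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` }
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  console.log(await response.json());
}
```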
You can also use the API explorer to get the same information:
https://dev.applicationinsights.io/apiexplorer/query?appId=DEMO_APP&apiKey=DEMO_KEY&query=requests%20%7C%20project%20cloud_RoleInstance
Here is another way to find out the IP address of your Azure Web App:
How can I determine the IP address of an Azure hosted WebApp
I run a number of App Service ASP.NET MVC web applications. I think it would be a good idea to add a WAF in front of the App Service websites to enable OWASP protection as well as gain more visibility into suspicious attacks. I would also want this linked into Azure Security Center.
As far as I can see this is not a problem for VM-hosted websites, but for App Service websites I have seen an SO comment (April 2017) suggesting this may not be supported, although that information may be outdated now.
1) Am I just trying to replace existing threat detection features that are built into App Service, so that adding a WAF is not required?
2) If required, are WAFs supported for App Service, and in particular can they be linked to Azure Security Center?
3) If required and possible, any pointers please?
By the way, I have considered using Cloudflare as a WAF wrapper around Azure, which looks interesting, but I wanted to check out Azure's own functionality first.
Thanks.
1) WAF is supported and recommended even for App Service because it will improve your security capabilities while also providing you with more control and real-time monitoring.
Configure App Service Web Apps with Application Gateway
2) Yes to both. See here:
Azure Security Center and Microsoft Web Application Firewall Integration
3) See above links :)
I am implementing OData using ASP.NET Web API 2.2. These services are deployed as Azure web roles in different Azure data centers, with the data stored in SQL Azure databases. When a request comes from a user, it has to be redirected to a particular web role deployment based on the user's details.
I am still exploring Azure Traffic Manager's capabilities. Is that the way to do it in Azure, or what is the right approach for such scenarios?
This is not possible using Azure Traffic Manager. Traffic Manager simply does DNS resolution based on the policy (perf, round robin, failover) you choose.
If you want to intelligently route customers based on some logic then I would suggest:
First, are you sure you want to do this? A key tenet of a highly scalable and available service is that a request can be served by any instance/deployment, and it is more important to get the request to the fastest deployment (i.e. the performance profile for WATM). There are valid reasons to need to direct users to a specific service, but I would suggest taking a hard look at this design requirement.
You could use ARR or URL Rewrite to internally fetch data from the correct deployment. This may have performance implications, but it would be easy to implement.
As Brendan mentioned in a comment, you could have a thin web API layer that just does a 302 redirect to send the user to the correct deployment.
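As a rough sketch of that idea only, here is a thin routing layer in Node/TypeScript (not the ASP.NET stack in the question); the user-to-deployment lookup and the deployment URLs are made up:

```typescript
// Hypothetical thin routing layer: map the user to a deployment and
// answer with a 302 redirect. All URLs and the lookup logic are placeholders.
import * as http from 'http';

const deployments: Record<string, string> = {
  eu: 'https://myapp-westeurope.example.net',
  us: 'https://myapp-eastus.example.net',
};

function resolveRegionForUser(userId: string): string {
  // Placeholder: real logic would consult a user -> data-center mapping.
  return userId.startsWith('eu-') ? 'eu' : 'us';
}

http.createServer((req, res) => {
  const requestUrl = new URL(req.url ?? '/', 'http://localhost');
  const userId = requestUrl.searchParams.get('user') ?? '';
  const target = deployments[resolveRegionForUser(userId)] + requestUrl.pathname;
  res.writeHead(302, { Location: target });
  res.end();
}).listen(8080);
```

The trade-off is an extra hop per first request, but the routing layer stays trivial and the data access itself still happens in the deployment closest to the user's data.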