Correct reporting on RESTful APIs - Azure

Part of Application Insights shows the 4xx errors, and of course that makes sense when, for example, a page or media item has been requested that doesn't exist. But when it comes to the logic of your application, it becomes annoying.
For instance, let's say one has to validate the title of a post to make sure it follows some rules (like not containing curse words, not being a duplicate, etc.).
I implement this as a "VerifyTitle" service and return a corresponding 4xx response with a message to the front end, which just needs to check for the 4xx and show the message.
The code is simple and works perfectly fine, and the user sees the expected behavior on the page, but in Application Insights I have 100 failures :\
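(For concreteness, the endpoint in question might look something like this hypothetical Web API action; the route, the banned-word list and the messages are made up:)
using System.Linq;
using System.Web.Http;

public class PostsController : ApiController
{
    // Placeholder rule data; in reality this would come from configuration or the database.
    private static readonly string[] BannedWords = { "badword1", "badword2" };

    [HttpPost]
    [Route("api/verifytitle")]
    public IHttpActionResult VerifyTitle(string title)
    {
        if (string.IsNullOrWhiteSpace(title) || BannedWords.Any(w => title.Contains(w)))
        {
            // Intentional 4xx: perfectly fine for the UI, but counted as a failed request in AI.
            return BadRequest("This title is not allowed.");
        }
        return Ok();
    }
}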

You can't blame Application Insights for not being able to distinguish logical errors built in by the developer from real-world (connectivity) issues.
That said, you might be able to exclude them using custom telemetry filters; see the docs. But then you have to provide a way to tell the difference, for example by using the request path to exclude certain endpoints.
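A minimal sketch of such a filter as an ITelemetryProcessor (the /api/verifytitle path is only a placeholder for whatever convention marks your validation endpoints):
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Marks expected validation 4xx responses as successful so they no longer count as failures.
// Simply return without calling _next.Process(item) if you want to drop them entirely.
public class ValidationErrorFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public ValidationErrorFilter(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        var request = item as RequestTelemetry;
        if (request != null
            && request.Url != null
            && request.Url.AbsolutePath.StartsWith("/api/verifytitle", System.StringComparison.OrdinalIgnoreCase)
            && !string.IsNullOrEmpty(request.ResponseCode)
            && request.ResponseCode.StartsWith("4"))
        {
            request.Success = true; // expected business-rule rejection, not a real failure
        }
        _next.Process(item);
    }
}
In the classic ASP.NET SDK you would register this in ApplicationInsights.config under TelemetryProcessors, or in code via TelemetryConfiguration.Active.TelemetryProcessorChainBuilder.Use(next => new ValidationErrorFilter(next)) followed by Build().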

Related

Serving an HTTP request response as a dialog response - Composer Framework

We are developing a chatbot to handle internal and external processes for a local authority. We are trying to display contact information for a particular service from our API endpoint. The HTTP request is successful and delivers, in part, exactly what we want, but there's still some unnecessary noise we can't exclude.
We specifically just want the text out of the response ("Response").
Logically, we thought all we needed to do was drill down into ${dialog.api_response.content.Response}, but that fails the HTTP request, while ${x.content} returns successfully but includes Tags, response and the fields within.
Is there something simple we've missed using Composer to access what we're after, or do we need to change the way our endpoint is responding? Unfortunately the MS documentation for FrwrkComp is lacking, to say the very least.
N.B. The response is currently set up as a (syntactically) SSML response; this is just a test case using an existing resource.
Response in the Emulator
Snippet from FwrkComp
Turns out it was the first thing I tried, just syntactically correct. For the code given, it was as simple as:
${dialog.api_response.content[0].Response}

Cypress e2e testing - How to get around Cross Origin Errors?

I'm testing a web app that integrates Gmail, Slack, Dropbox, etc. I'm trying to write end-to-end tests with Cypress.io to verify that auth flows are working. Cypress restricts me from navigating outside my app's domain and gives me a Cross Origin Error. The Cypress docs say that testing shouldn't involve navigating outside your app, but the entire purpose of testing my app is to make sure these outside auth flows are functioning.
The docs also say you can add
"chromeWebSecurity": false
to the cypress.json file to get around this restriction. I have done this, but am still getting cross origin errors (this is at the heart of my question; I would ideally like to get around this restriction).
I have attempted Cypress' single-sign-on example: https://github.com/cypress-io/cypress-example-recipes#logging-in---single-sign-on
I was not able to make it work, and it's a lot more code than I think should be necessary.
I've commented on this thread on GitHub, but there are no responses yet.
Full error message:
Error: CypressError: Cypress detected a cross origin error happened
on page load:
> Blocked a frame with origin "https://www.example.com" from
accessing
a cross-origin frame.
Before the page load, you were bound to the origin policy:
> https://example.com
A cross origin error happens when your application navigates to a new
superdomain which does not match the origin policy above.
This typically happens in one of three ways:
1. You clicked an <a> that routed you outside of your application
2. You submitted a form and your server redirected you outside of your
application
3. You used a javascript redirect to a page outside of your application
Cypress does not allow you to change superdomains within a single test.
You may need to restructure some of your test code to avoid this
problem.
Alternatively you can also disable Chrome Web Security which will turn
off this restriction by setting { chromeWebSecurity: false } in your
'cypress.json' file.
https://on.cypress.io/cross-origin-violation
setting { "chromeWebSecurity": false } in my 'cypress.json' file worked for me
If you are trying to assert the proper navigation to Gmail...
You should stub the function that handles that and assert that the request contains the necessary key-value pairs. Without more information on the intent of this test it is hard to give specific advice. It sounds like you would want a "spy" (a type of test double).
Here is the documentation for spies: https://docs.cypress.io/guides/guides/stubs-spies-and-clocks.html#Stubs
If you are trying to verify the contents of the email
You will want to use a library to handle reading Gmail. cy.task can be used to invoke JavaScript from an external library. This Medium article has a good write-up on how to do this.
Medium article: https://medium.com/@levz0r/how-to-poll-a-gmail-inbox-in-cypress-io-a4286cfdb888
TL;DR of the article
Set up and define the custom task (method) that will check Gmail (uses "gmail-tester" in the example)
Use Cypress to trigger the email (obviously)
Capture/define data (like email subject, dynamic link, email content)
Assert that the data returned from gmail-tester is as expected
DON'T
Use the Gmail UI in your test. Avoiding the UI saves you from test flake (all UI testing has some flakiness) and from potential UI changes to the Gmail app that would require updates to your test. The backend methods that gmail-tester uses are less likely to change over time than the UI. You also avoid the CORS error.
Disabling cross-origin security, if you must... (eek, bugs!)
If you must, add chromeWebSecurity: false to the cypress.json config file. Be sure to add it inside the curly braces; there should only be one set of braces in that file.
NOTE: One cannot simply use cy.visit(<diffSuperDomain>); there is an open issue. Apparently this is a very difficult change to make in Cypress.
One potential workaround is to have only one superdomain per test. It should work if you set chromeWebSecurity to false and have only one domain per test (it block). Be careful, as this opens you up to cascading failures, since one test will rely on the next. Hopefully they fix this soon.
https://docs.cypress.io/guides/guides/web-security.html#Disabling-Web-Security
Since Cypress 9.6.0 you can set "experimentalSessionAndOrigin": true in cypress.json. This allows your tests to operate across multiple domains using the cy.origin command. Example from the official blog:
it('navigates', () => {
  cy.visit('/')
  cy.get('h1').contains('My Homepage')
  cy.origin('www.acme.com', () => {
    cy.visit('/history/founder')
    cy.get('h1').contains('About our Founder, Marvin Acme') // 👍
  })
})
That blog entry also has examples of how to use this to authenticate at another domain. It worked fine for me with Keycloak, using both Chrome and Firefox.
There are a few simple workarounds to these common situations:
Don’t click <a> links in your tests that navigate outside of your application. Likely this isn’t worth testing anyway. You should ask yourself: What’s the point of clicking and going to another app? Likely all you care about is that the href attribute matches what you expect. So make an assertion about that. You can see more strategies on testing anchor links in our “Tab Handling and Links” example recipe.
You are testing a page that uses Single sign-on (SSO). In this case, your web server is likely redirecting you between superdomains, so you receive this error message. You can likely get around this redirect problem by using cy.request() to manually handle the session yourself.
If you find yourself stuck and can’t work around these issues you can just set this in your cypress.json file. But before doing so you should really understand and read about the reasoning here.
// cypress.json
{
  "chromeWebSecurity": false
}

Understanding WebHook and Azure Functions Usage

What I Heard:
WebHooks: They are just HTTP POSTs, not a new protocol or any new technology. Let me put it in an example. Let's say we want to watch a directory for any changes and ping the user whenever anything changes. I write C# code watching the directory for changes, and when something happens, I do an HTTP POST to let the user know that something changed which might interest them.
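(A minimal sketch of that idea; the folder path and the receiver URL are obviously made up:)
using System;
using System.IO;
using System.Net.Http;
using System.Text;

class DirectoryWatcherHook
{
    static readonly HttpClient Http = new HttpClient();

    static void Main()
    {
        var watcher = new FileSystemWatcher(@"C:\watched-folder");
        watcher.Changed += async (sender, e) =>
        {
            // The "webhook" is nothing more than this HTTP POST to a URL
            // that the consumer registered with us beforehand.
            var payload = new StringContent(
                "{\"file\":\"" + e.Name + "\",\"change\":\"" + e.ChangeType + "\"}",
                Encoding.UTF8, "application/json");
            await Http.PostAsync("https://example.com/hooks/dir-changed", payload);
        };
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching... press Enter to quit.");
        Console.ReadLine();
    }
}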
Azure Functions: The best way I can explain it is hosting bits and pieces of reusable code online and hitting them via an HTTP call whenever needed, without worrying about infrastructure or any supporting platform.
What I want to know:
Why is the name WebHook making so much noise? I mean, it's very clear and straightforward programming that you do to tell your users that something happened, via some API calls or event listeners.
Can someone please help me understand these terminologies if I got them wrong? Some examples along with your explanation would also help.

Unable to log in to Azure web app via VS2015 web performance test

How do I correctly handle the login/authentication scenario for an Azure web app in my VS2015 web performance test?
I created an XML file as a data source for the WAAD username and password. I bind the username and password to the form post parameters login and passwd, respectively, at the request https://login.microsoftonline.com/xxxx/login
But when I run the test, the Web Browser tab shows this error:
We can't sign you in
Your browser is currently set to block JavaScript. You need to allow
JavaScript to use this service.
To learn how to allow JavaScript or to find out whether your browser
supports JavaScript, check the online help in your web browser.
I also get a number of errors like this:
The value of the ExpectedResponseUrl property
Validation xxxx.azurewebsites.net/xxxx/docs/xxxx.aspx does
not equal the actual response URL
login.microsoftonline.com/xxxx/wsfed. QueryString
parameters were ignored.
Any idea how I can successfully log in to the Azure web app via the web performance test?
There are several methods of login and authentication that can be used. Just binding values to form post parameters may not be sufficient or correct. You will find the login form has hidden session identities that must be passed along with the login data. I find that recording a test twice, using as nearly as possible the same inputs and doing the same activities, helps. These two tests can then be compared to find the dynamic data that needs to be handled.
In a comment the questioner added "I noticed these parameters, n1-43 are different but I have no idea what they represent. How do I handle them?". I cannot know what they represent as I do not know the website you are testing. You could ask the website developers. Or, better, treat them as dynamic data: find where the values come from, save them into context variables and use them as needed. This is basic web test development. Here and here are two good articles on what to do.
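For illustration, a coded-web-test sketch of that "extract and reuse" idea (Microsoft.VisualStudio.TestTools.WebTesting); the URLs, the "ctx" hidden-field name and the context parameter names are placeholders, so record your own test to see what the login page really emits:
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;
using Microsoft.VisualStudio.TestTools.WebTesting.Rules;

public class LoginWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // 1. GET the login page and capture all hidden fields into {{$HIDDEN1.*}}.
        var loginPage = new WebTestRequest("https://login.microsoftonline.com/xxxx/login");
        var hiddenFields = new ExtractHiddenFields
        {
            Required = true,
            HtmlDecode = true,
            ContextParameterName = "1"
        };
        loginPage.ExtractValues += hiddenFields.Extract;
        yield return loginPage;

        // 2. POST the credentials together with the captured hidden session values.
        var login = new WebTestRequest("https://login.microsoftonline.com/xxxx/login") { Method = "POST" };
        var body = new FormPostHttpBody();
        body.FormPostParameters.Add("login", this.Context["Username"].ToString());  // bound from the XML data source
        body.FormPostParameters.Add("passwd", this.Context["Password"].ToString());
        body.FormPostParameters.Add("ctx", "{{$HIDDEN1.ctx}}");                     // hypothetical hidden field
        login.Body = body;
        yield return login;
    }
}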
The message about JavaScript not being supported can be ignored. Visual Studio web tests do not support JavaScript or any other "active" parts of a web page; they only support the HTML part. Your job as a tester is to simulate what the JavaScript does for the specific user journeys you are testing. That simulation is generally just filling in the correct values (via context parameters) in the recorded requests.
Unexpected response URLs can be due to earlier failures, such as the login not working. I suggest not worrying about them until all of the other test problems are solved. Then, if you need help, ask another new question.

How do I reduce the amount of trace logs that Application Insights sends to the server

I'm working with a production system that has a moderate amount of load. The amount of trace events that AI sends up is way too detailed and makes it difficult to wade through the logs later.
Each request to the server has information such as:
Message='Selected formatter='JsonMediaTypeFormatter', content-type='application/json; charset=utf-8'', Operation=DefaultContentNegotiator.Negotiate
and
Message='Action returned 'RZ.API.Support.Controllers.OperationActionResult`1[System.Collections.Generic.List`1[RZ.Entity.System.ClientMessage]]'', Operation=ReflectedHttpActionDescriptor.ExecuteAsync
There are maybe 30 entries for each request!
I just need the request type:
12/16/2015, 9:17:29 AM - REQUEST
GET /api/v1/user/messages
And the result code - as well as any custom stuff I do along the way.
So basically I want to trim most of the traces, except the request and the result (and any errors, etc.).
I have my eye on this bad boy in the AI config:
<Add Type="Microsoft.ApplicationInsights.Web.RequestTrackingTelemetryModule, Microsoft.AI.Web"/>
... but I cannot for the life of me see any doco on how to ask it to reduce the amount of stuff that is sent!
Any help is much appreciated.
Jordan.
P.S. All the extra logging has put us over the 15m-a-month plan; we had to upgrade!
RequestTrackingTelemetryModule does not do anything like what you described; it adds request, exception and dependency collection. In your example you are saying that you see verbose WebApi traces being forwarded to Application Insights, so I assume you actually use the Application Insights logging adapter.
Here you can read how WebApi traces can be forwarded to AI Version 1: http://apmtips.com/blog/2014/11/13/collect-asp-dot-net-mvc-web-api-traces-with-application-insights/
Here you can read how WebApi traces can be forwarded to AI Version 2:
http://apmtips.com/blog/2016/01/05/webapi-tracing-powered-by-ai-in-vs2015-update1/
Source code of logging adapters: https://github.com/Microsoft/ApplicationInsights-dotnet-logging
Documentation: https://azure.microsoft.com/en-us/documentation/articles/app-insights-search-diagnostic-logs/#trace
So you have multiple options:
Do not use logging adapters
Change the verbosity of WebApi tracing (read http://www.asp.net/web-api/overview/testing-and-debugging/tracing-in-aspnet-web-api); I would prefer this one since you probably still want to collect failures. See the sketch after this list.
Remove WebApi tracing (as you did)
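For the second option, the trace writer that EnableSystemDiagnosticsTracing() returns can be dialed down; a sketch (pick the minimum level that suits you):
using System.Web.Http;
using System.Web.Http.Tracing;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // ... routes, formatters, etc. ...

        // Keep Web API tracing, but stop the verbose per-request traces
        // (content negotiation, action selection, ...) from reaching the log adapter and AI.
        var traceWriter = config.EnableSystemDiagnosticsTracing();
        traceWriter.IsVerbose = false;
        traceWriter.MinimumLevel = TraceLevel.Warn;
    }
}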
To answer my own question.
In my WebApiConfig file, I had:
config.EnableSystemDiagnosticsTracing();
Removing this line drastically cut down the clutter to the level I was trying to achieve.
As of version 2.0 of the Application Insights SDKs, you can also limit the data sent by enabling sampling:
https://azure.microsoft.com/en-us/documentation/articles/app-insights-sampling/
If you add
<MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>
to your ApplicationInsights.config, the SDK can limit how much goes out. The article above has a LOT more settings/configuration you can use to get other specific behavior, but the one above is the simplest.
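If you would rather set this in code than in ApplicationInsights.config, the same cap can be applied through the telemetry processor chain builder; roughly (SDK 2.x, assuming the classic TelemetryConfiguration.Active):
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;

public static class TelemetrySetup
{
    // Call once at startup, e.g. from Application_Start, after ApplicationInsights.config has loaded.
    public static void EnableAdaptiveSampling()
    {
        var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;
        builder.UseAdaptiveSampling(maxTelemetryItemsPerSecond: 5);
        builder.Build();
    }
}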
As far as I know there are no configuration options available for the RequestTrackingTelemetryModule. You could just turn it off (by uninstalling the respective NuGet package or commenting out the XML) and/or install different/additional telemetry modules.
See app-insights-configuration-with-applicationinsights-config for a list of modules and configuration options.

Resources