Servant Quickcheck - how do you see which route caused the test failure? - haskell

I'm building an API with Servant, and it seems to work pretty well. In line with best practices, I'm writing some tests for it, following the official guide here. But I'm struggling with the part that uses Servant-QuickCheck.
I'm trying to use it to test that my API doesn't give any 500 errors, like this:
quickCheckSpec :: Spec
quickCheckSpec = describe "QuickCheck global tests for public API -" $ do
  it "API never gives 500 error" $
    withServantServer publicAPI (return server) $ \burl ->
      serverSatisfies publicAPI burl args (not500 <%> mempty)
but unfortunately the test is failing. This is a surprise to me, given that I've not encountered such an error when testing manually - but hey, this is why we have automated tests, right? Especially with tools like QuickCheck which, as far as I understand, is supposed to zero in on "edge cases" that you might not think to test manually. (This is actually my first time using QuickCheck in any form.)
The problem, though, is that a test failure is useless unless you know what it failed on. And this is what Servant-QuickCheck does not seem to want to tell me. The only output I get regarding the test failure is this:
API never gives 500 error FAILED [1]

Failures:

  src\Servant\QuickCheck\Internal\QuickCheck.hs:146:11:
  1) QuickCheck global tests for public API - API never gives 500 error
       Nothing

  To rerun use: --match "/QuickCheck global tests for public API -/API never gives 500 error/"

Randomized with seed 1712537046

Finished in 10.8513 seconds
4 examples, 1 failure
(There are some unit tests run before, which all pass - the 10 seconds isn't all on the above test!)
This is rather bemusing, because as you can see there's nothing which indicates how the failure occurred. Since it's testing an API and, as far as I understand it, choosing valid routes at random to test with, it ought to tell me at which route it encountered the 500 error. Yet, as you can see, there is precisely nothing there.
(I'm developing the project with Stack btw, and a little bit below the above it says Logs printed to console, so I don't believe I'm missing any log files buried somewhere which could enlighten me. I haven't been able to find any in my .stack-work folder either.)
I've even changed my args so that it has the chatty field set to True, but I still only get the above.
The only clue I get, which isn't related to QuickCheck, is that my API uses Persistent and logs the database queries to the terminal, so I can see the queries that have been run - which I can in turn relate back to the route. However I don't particularly trust this, because the route that runs the query shown is definitely working in manual tests. (And it's not always the same query when it fails.) There are also a couple of simple routes that don't query the database, so the failure isn't necessarily related to the last database query printed (although needless to say, I've manually tested those routes too and they're fine).
I've even wondered if the problem might not be that QuickCheck hits my localhost server so many times in a row that it simply can't cope and errors out, but in that case I would expect this to be a common problem, remarked upon in the tutorial. (And the database logging output suggests that it in fact always fails on the very first "hit".)
Anyway, that's just speculation, and I realise that it's up to me to figure out why my API might be failing the tests, given that I didn't think it relevant to share any details of my actual API.
What I'm really asking is: why am I not told which routes were tested, and which ones passed/failed, so that I at least know exactly which route(s) to investigate? And is there any way to make Servant-Quickcheck show me that information?
Any answer to that would be much appreciated.
FURTHER INFORMATION: I've tried @MarkusBarthlen's suggestion below, but it didn't give me any more information. However, it did point out that the Nothing you can see in my console output is probably key to this, because in Markus's example he gets a Just value holding the request information. Looking at line 146 of the source file referenced in the failure message, it's clearly the x there that is a Maybe value. But I can't figure out what type it actually is, or what the significance is of it being Nothing in my case. Unfortunately the code is beyond my own Haskell experience/ability to make much sense of - I can see that x is read out of an MVar, which is somehow populated with the result of the test, but as I said, I'm not able to make sense of the details. So I really need to know not only why my test is failing, but why this particular x value is Nothing for me, and what that means in terms of the test. I would really appreciate an answer that explains this in a way I can understand.

From what I see, you could write your own RequestPredicate, as in https://github.com/haskell-servant/servant-quickcheck/blob/master/src/Servant/QuickCheck/Internal/Predicates.hs - see, for example, notLongerThan.
I tested it with
-- Needs: when (Control.Monad), throw (Control.Exception), httpLbs and
-- responseStatus (Network.HTTP.Client), status200 (Network.HTTP.Types),
-- plus RequestPredicate and PredicateFailure from Servant.QuickCheck's internals.
not200b :: RequestPredicate
not200b = RequestPredicate $ \req mgr -> do
  resp <- httpLbs req mgr
  when (responseStatus resp == status200) $
    throw $ PredicateFailure "not200b" (Just req) resp
  return []
yielding a response of
Failures:

  src/Servant/QuickCheck/Internal/QuickCheck.hs:146:11:
  1) QuickCheck global tests for public API - API never gives 200
     Failed:
     Just Predicate failed
       Predicate: not200b

       Request:
         Method: "GET"
         Path: /users
         Headers:
         Body:

       Response:
         Status code: 200
         Headers: "Transfer-Encoding": "chunked"
                  "Date": "Fri, 22 Nov 2019 21:32:07 GMT"
                  "Server": "Warp/3.2.28"
                  "Content-Type": "application/json;charset=utf-8"
         Body: [{"userId":1,"userFirstName":"Isaac","userLastName":"Newton"},{"userId":2,"userFirstName":"Albert","userLastName":"Einstein"}]

  To rerun use: --match "/QuickCheck global tests for public API -/API never gives 200/"

Randomized with seed 1026627332

Finished in 0.0226 seconds
1 example, 1 failure

Related

Error: write EPIPE when running Jest tests on Gitlab CI's personal VPS

What I'm trying to do now is simple: create a job in GitLab CI to run the tests I've made, on my own personal VPS. I am using NestJS as my backend. The problem is that, for some reason, one or more of the tests return a write EPIPE error. The only pattern I can see is that the error only occurs for tests that upload an image using a multipart form, but not consistently: when I ran the suite 3 times, sometimes write EPIPE occurred once, sometimes twice, sometimes not at all.
Here is my code snippets for uploading image using multipart form on the test:
it('should be able to upload image', () => {
  return request(myHost)
    .post('/upload-image')
    .expect(200)
    .attach('image', './testimage.jpg')
    .then((res): any => {
      expect(res.body).toEqual({});
    });
});
For additional information, testimage.jpg is only 13.9kB, so it's not a big file.
My Node version: 14.16.0, Jest version: 26.6.3, NestJS version: 7.6.15, and Ubuntu version: 20.04.
I've tried installing the libpng-dev and libfontconfig packages, and running the tests with the -- --silent flag, but none of it worked.
I finally got it working by setting the Connection header to keep-alive on the tests that had the write EPIPE error. The best explanation I can give is that while the image is uploading, the connection somehow gets closed, so the upload process stops. That explains why there was no error when I added console.log(err) to the test, and also why there was no error on my backend side.
The EPIPE error explanation in the Node.js docs also mentions this, which can be seen here:
Source: https://nodejs.org/api/errors.html, in the Common System Errors section.
The part of the issue I don't understand is why the other image-upload tests did not encounter this EPIPE error. I have well over 20 tests that include the upload-image step, but fewer than 10 encountered it. Besides that, which of those tests hit it is randomized too: not every test inside this group encounters it on every run.
For example, say I have 20 tests for uploading an image, named test1, test2, and so on. For some reason, only some tests encounter the EPIPE error - let's say only test1 to test8 (in the real case, which tests get it is random). Then among test1 to test8, sometimes they get it and sometimes they don't, but the range stays the same (on the first run, the tests that got the EPIPE error were test1, test3, test5 and test6; on the second run, the set could be different, e.g. test1, test3, test5, test7 and test8, but it never went beyond test8).
Perhaps someone can explain this.

discord webhook can not send empty message

I have written this small PoC for Discord webhooks, and I am getting an error that says "Can not send empty message". I tried to Google it but couldn't find documentation or an answer.
here is my code
import requests
discord_webhook_url = 'https://discordapp.com/api/webhooks/xxxxxxxxxxxxxxxxxx/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
data = {'status': 'success'}
headers = {'Content-Type': 'application/json'}
res = requests.post(discord_webhook_url, data=data, headers=headers)
print(res.content)
I'm late, but I came across this issue recently, and seeing as it has not been answered yet, I thought I'd document my solution to the problem.
For the most part, it is largely due to the structure of the payload being wrong.
https://birdie0.github.io/discord-webhooks-guide/discord_webhook.html provides an example of a working structure. https://discordapp.com/developers/docs/resources/channel#create-message is the official documentation.
I was also able to get a minimum test case working using: {"content": "Test"}.
If it still fails after that with the same error, the likely causes are:
If using curl, check to make sure there are no accidental escape characters/backslashes (\)
If using embeds with fields, ensure there are no empty values
When in doubt, ensure all values are populated and not "". Through trial and error / the process of elimination, you can figure out exactly which key-value pair is causing the issue, so I suggest playing with the webhook via curl before turning it into a full program.
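In the question's snippet, the underlying bug is that the requests library's data= parameter form-encodes a dict rather than serializing it to JSON, so Discord's parser finds no content field and reports an empty message. A minimal sketch of the difference (the URL here is a placeholder; the requests are only prepared, nothing is actually sent):

```python
import requests

payload = {'content': 'Test'}

# data= form-encodes the dict, so the body is not JSON and Discord
# finds no "content" key - hence the "empty message" error.
form_req = requests.Request(
    'POST', 'https://example.invalid/webhook', data=payload).prepare()
print(form_req.headers['Content-Type'])  # application/x-www-form-urlencoded
print(form_req.body)                     # content=Test

# json= serializes the dict to JSON and sets the header automatically.
json_req = requests.Request(
    'POST', 'https://example.invalid/webhook', json=payload).prepare()
print(json_req.headers['Content-Type'])  # application/json
body = json_req.body
if isinstance(body, bytes):              # newer requests versions encode to bytes
    body = body.decode('utf-8')
print(body)                              # {"content": "Test"}
```

So in the question's code, passing json=data to requests.post (and dropping the hand-written Content-Type header, which requests then sets for you) should be enough to make the webhook fire.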

Random failure of selenium test on test server

I'm working on a project which uses Node.js and Nightwatch for test automation. The problem is that the tests are not reliable and give lots of false positives. I did everything to make them stable and am still getting the errors. I went through some blogs, like https://bocoup.com/blog/a-day-at-the-races, and did some code refactoring. Does anyone have suggestions to solve this issue? At this moment I have two options: either I rewrite the code in Java (removing Node.js and Nightwatch from the solution, as I'm far more comfortable in Java than JavaScript - most of the time I struggle with the non-blocking nature of JavaScript), or I take snapshots, review app logs, and run one test at a time.
Test environment :-
Server -Linux
Display - Framebuffer
Total VM's -9 with selenium nodes running the tests in parallel.
Browser - Chrome
The type of error I get is "element not found". Most of the time the tests fail as soon as the page has loaded. I have already set an 80-second timeout, so time can't be the issue. The tests run in parallel, but on separate VMs, so I don't know whether that could be an issue or not.
Edit 1: -
I've been working on this to find the root cause. I did the following things to eliminate the random failures:
a. Added --suiteRetries to retry the failed cases.
b. Went through the error screenshot and DOM source. Everything seems fine.
c. Replaced the browser.pause with explicit waits
Also while debugging I observed one problem, maybe that is the issue which is causing random failures. Here's the code snippet
for (var i = 0; i < apiResponse.data.length; i++) {
  var name = apiResponse.data[i];
  browser.useXpath().waitForElementVisible(
    pageObject.getDynamicElement("#topicTextLabel", name.trim()), 5000, false);
  browser.useCss().assert.containsText(
    pageObject.getDynamicElement("#topicText", i + 1),
    name.trim(),
    util.format(issueCats.WRONG_DATA)
  );
}
I added the XPath check to validate whether I'm waiting long enough for that text to appear. I observed that the visibility assertion passes, but in the next assertion #topicText comes back as the previous value, or null. This is an intermittent issue, but on the test server it happens frequently.
There is no magic bullet for brittle UI end-to-end tests. In an ideal world there would be an option avoid_random_failures=true that would quickly and easily solve the problem, but for now that's only a dream.
Simply rewriting all the tests in Java will not solve the problem, but if you feel better in Java, then I would definitely go in that direction.
As you already know from this article Avoiding random failures in Selenium UI tests there are 3 commonly used avoidance techniques for race conditions in UI tests:
using constant sleep
using WebDriver's "implicit wait" parameter
using explicit waits (WebDriverWait + ExpectedConditions + FluentWait)
These techniques are also briefly mentioned on WebDriver: Advanced Usage, you can also read about them here: Tips to Avoid Brittle UI Tests
Methods 1 and 2 are generally not recommended; they have drawbacks. They can work well on simple HTML pages, but they are not 100% reliable on AJAX pages, and they slow down the tests. The best one is #3 - explicit waits.
In order to use technique #3 (explicit waits), you need to familiarize yourself and become comfortable with the following WebDriver tools (I point to their Java versions, but they have counterparts in other languages):
WebDriverWait class
ExpectedConditions class
FluentWait - used very rarely, but very useful in some difficult cases
ExpectedConditions has many predefined wait states; the most used (in my experience) is ExpectedConditions#elementToBeClickable, which waits until an element is visible and enabled such that you can click it.
How to use it - an example: say you open a page with a form containing several fields into which you want to enter data. Usually it is enough to wait until the first field appears on the page and is editable (clickable):
By field1 = By.xpath("//div//input[.......]");
By field2 = By.id("some_id");
By field3 = By.name("some_name");
By buttonOk = By.xpath("//input[ text() = 'OK' ]");
....
....
WebDriverWait wait = new WebDriverWait( driver, 60 ); // wait max 60 seconds

// wait max 60 seconds until the element is visible and enabled such that you can click it
// if you can click it, that means it is editable
wait.until( ExpectedConditions.elementToBeClickable( field1 ) ).sendKeys( "some data" );
driver.findElement( field2 ).sendKeys( "other data" );
driver.findElement( field3 ).sendKeys( "name" );
....
wait.until( ExpectedConditions.elementToBeClickable( buttonOk ) ).click();
The above code waits until field1 becomes editable after the page is loaded and rendered - but no longer than necessary. If the element is not visible and editable after 60 seconds, the test will fail with a TimeoutException.
Usually it's only necessary to wait for the first field on the page: if it becomes active, then the others will be too.

CocoaLibSpotify - SPSearch Returns 0 Artists via KVO for @"a"

SPSearch was working as expected, initialized as:
SPSearch* new_search = [[SPSearch alloc] initWithSearchQuery: search_string
                                                    pageSize: 50
                                                   inSession: active_session
                                                        type: SP_SEARCH_SUGGEST];
I then have KVO set up for @"artists" on the SPSearch instance. This is done by way of a category that has the instance observe itself for changes in @"artists" (and others). After new_search is instantiated, [new_search setDelegate: searchController] is called, which causes the SPSearch instance to call [searchController setArtists: artists_array] when KVO becomes aware of the update.
This was all working perfectly until I updated Xcode. As far as I can tell, nothing else changed.
Now, any search (such as @"a", but not limited to that) returns 0 artists in the array provided via the KVO notification.
SPSession instance.connectionState is SP_CONNECTION_STATE_LOGGED_IN when the search is created. As far as I can tell, everything is being properly instantiated, logged in, etc.
What could possibly be going on that causes search to always return no results? What are some places I might start investigating to figure out what is going on?
CocoaLibSpotify ships with a bunch of unit tests, which include tests of SPSearch. Please run these tests (details in the readme) - if the search tests pass, have a look at how they're implemented. Your solution sounds like it could cause problems in the world of ARC.

NetworkReachability in Monotouch returns a false positive

I'm trying to test NetworkReachability in the AppDelegate.FinishedLaunching method of my app (invoking on the main thread so I don't hit the 20-second timeout).
The problem I'm up against is that the test always returns "false" (i.e. network not available), even though this is not the case. I'm running in the iPhone Simulator, and if I let my app run on a bit further, I can access the network with no problem.
I've read elsewhere that there appears to be a known bug in Apple's Reachability code. I wondered if anyone has come across this issue, and perhaps found a workaround?
Thanks in advance,
Mark
I do this:
bool status = Reachability.InternetConnectionStatus() != NetworkStatus.NotReachable && Reachability.IsHostReachable("google.com");
You can replace google.com with whatever domain your API is located at. I was getting false positives myself because I was originally putting in the whole link, like "http://google.com" - it would return false on those. Once I removed the http and just had the domain, it started working.
I should note I am using the Reachability class by Miguel de Icaza