I'm working on a program written with libpqxx. It has extensive unit tests except for the libpqxx queries, which is becoming problematic as more and more logic gets pushed to SQL.
The only way I've found to add tests that cover this part is to run them against a test database whose data is set up before each test and removed after. The downside is that this is reasonably heavy, requiring a container or VM with a full PostgreSQL instance, plus harnesses to bring the tests up and tear them down.
This seems like it must be a solved problem, but I've not found anything I can just copy and use. It also means our developers have to wait longer for test results, since the tests are heavier, though perhaps there's no way around that.
Is there a standard solution to this problem? My friends who write web frameworks test their database code so easily that I'm hesitant to believe the problem is really roll-your-own here.
Related
We're using Jest in our product for unit and integration tests.
At the moment, we're searching for a solution to measure the duration of scenarios (at the unit, API, and E2E levels) between two CI/CD builds, to see whether code changes lead to a performance increase or decrease.
There are external tools like JMeter and Gatling, but they don't feel like the right fit. On the one hand, we would have to rewrite tests that we've already written in Jest. On the other hand, these tools are more focused on load, scalability, breakpoint, and stress testing, which feels overdimensioned for our use case. (We use a completely serverless architecture, and we only want to know whether code changes have an impact on performance.)
So I was wondering whether it might simply be possible to reuse the Jest tests we have already written to also measure performance in some way and compare it between CI/CD builds.
Do you know of a library or tool that could help with that? Or do you perhaps have a completely different opinion on how to approach this?
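One approach that avoids a second toolchain is a custom Jest reporter: Jest already records per-test durations, and a reporter can dump them to a file that the CI pipeline archives and diffs against the previous build. This is only a sketch, not an existing library; `DurationsReporter`, the `outFile` option, and the comparison step are my own assumptions:

```js
// durations-reporter.js -- a hypothetical custom Jest reporter that writes
// per-test durations to a JSON file so CI can diff two builds.
const fs = require('fs');

class DurationsReporter {
  constructor(globalConfig, options) {
    this._outFile = (options && options.outFile) || 'test-durations.json';
    this._durations = {};
  }

  onTestResult(test, testResult) {
    for (const result of testResult.testResults) {
      // fullName identifies the test; duration is in milliseconds
      // (it can be null for skipped tests, so guard against that).
      if (result.duration != null) {
        this._durations[result.fullName] = result.duration;
      }
    }
  }

  onRunComplete() {
    fs.writeFileSync(this._outFile, JSON.stringify(this._durations, null, 2));
  }
}

module.exports = DurationsReporter;
```

You would register it in jest.config.js with `reporters: ['default', ['<rootDir>/durations-reporter.js', { outFile: 'test-durations.json' }]]`, then have the CI job archive the file per build and flag tests whose duration regressed beyond some threshold. Be aware that CI runner noise and serverless cold starts can dominate small differences.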
I've built a web app that aggregates trading and blockchain data from several APIs and displays it in a React frontend (Node backend).
What is the best way to implement tests that check data integrity or catch issues when they occur?
I am extremely new to testing and would appreciate any guidance or direction. I have gone through several testing frameworks and libraries and am kind of dumbfounded.
You don't really test apps for 'integrity' of data, as you call it, especially when the data comes from external sources (not your own DB, for example).
If you own the data, you can test DB integrity, but as you say, that is not the case here.
What you do, though, is write unit tests (functional, regression, and end-to-end tests too, but what you want will mostly be achieved with unit tests).
Within tests, you basically provide all kinds of data to your app and check if results are what you expect them to be (both for working and breaking scenarios).
This way, you can be sure it works as you designed it.
If at some point in the future a bug is exposed, or you find one yourself, define precisely why the bug occurs and add a test for it.
When all of your tests pass again after you fix the code responsible for the bug, you know you are good.
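To make that concrete, here is a minimal Jest-style sketch; `normalizeTrade` and its expected output shape are hypothetical stand-ins for your own data-shaping code:

```js
// normalizeTrade.test.js -- a sketch; normalizeTrade is a hypothetical
// function standing in for whatever reshapes your API payloads.
const normalizeTrade = require('./normalizeTrade');

test('maps a well-formed payload to the internal shape (working scenario)', () => {
  const payload = { sym: 'BTC-USD', px: '42000.5', ts: 1700000000 };
  expect(normalizeTrade(payload)).toEqual({
    symbol: 'BTC-USD',
    price: 42000.5,
    timestamp: 1700000000,
  });
});

test('rejects a non-numeric price (breaking scenario)', () => {
  const payload = { sym: 'BTC-USD', px: 'oops', ts: 1700000000 };
  expect(() => normalizeTrade(payload)).toThrow();
});
```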
As for libraries:
Jest (https://jestjs.io/) is the go-to library for many; it's mostly for unit tests.
Jasmine and Mocha are also popular choices.
For end-to-end testing, check out TestCafe (https://github.com/DevExpress/testcafe); I recommend it.
You should also test your API with Mocha, Chai, Supertest or Chakram.
This way, all layers of your app are covered and bugs can be spotted more quickly.
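For the API layer, a Supertest-based test could look like the sketch below; the Express `app` module and the `/api/trades` route are assumptions about your setup:

```js
// trades.api.test.js -- a sketch of an API-level test with Supertest,
// assuming an Express app exported from ./app and a /api/trades route.
const request = require('supertest');
const app = require('./app');

test('GET /api/trades returns JSON with a list of trades', async () => {
  const res = await request(app)
    .get('/api/trades')
    .expect('Content-Type', /json/)
    .expect(200);

  // Assert on the shape rather than exact values, since the
  // upstream data sources change over time.
  expect(Array.isArray(res.body)).toBe(true);
});
```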
Is there a way to unit test code that uses the Cloud Datastore API and is written for the flexible environment? testbed seems to be tied to the standard environment, and it looks like using the emulator would require launching and closing the emulator process, which is usually flaky for unit tests.
We ended up doing end-to-end testing (launching the tests against a real database in a dev environment, for example). As we have a tenant-based application, each test run simply creates a new tenant and performs all operations within the scope of that tenant, so there should not be any inconsistency. On the other hand, such a solution is pretty slow.
The solution above is just the easiest one, I believe.
Another option would be to split your code into db-dependent parts and a business logic part. In this case you would test only the business logic part and mock the db dependency. But when we investigated this, we found that we have a lot of code with one line of db write operations and only 1-3 lines of business logic. Splitting such code into different layers would be meaningless for testing and maintenance.
The last option, which is more generic than the previous one, is to mock the db. For each module that uses the db, you inject a mocked database instance before testing it, one that defines some canned responses. In this case, though, it is easy to fall into testing the implementation instead of the behavior, which again means such testing becomes quite ineffective.
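To illustrate the injection approach, here is a sketch written with Jest; `makeOrderService` and the `db.put` interface are hypothetical:

```js
// A sketch of injecting a mocked db into the module under test.
// makeOrderService and the db.put signature are hypothetical.
function makeOrderService(db) {
  return {
    async saveOrder(order) {
      if (!order.id) throw new Error('order must have an id'); // business logic
      await db.put('orders', order.id, order); // the single db write line
      return order.id;
    },
  };
}

test('saveOrder writes the order under its id', async () => {
  const db = { put: jest.fn().mockResolvedValue(undefined) };
  const service = makeOrderService(db);

  await expect(service.saveOrder({ id: '42' })).resolves.toBe('42');
  expect(db.put).toHaveBeenCalledWith('orders', '42', { id: '42' });
});
```

Note that the assertion on `db.put` is exactly the implementation coupling warned about above, which is why this option only pays off when there is real logic around the db call.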
I guess this question is really about testing approaches in general, and not about Datastore itself.
I'm working on a quite large Node.js code base which has been refactored and migrated from a legacy to a new service version several times, and I strongly suspect that some code is no longer used.
This dead code is still well tested, but I would like to get rid of it.
I had the idea to run one API server instrumented with Istanbul, put it in the production pool for some time (a few minutes, hours, or days), and see what code is actually exercised (and identify probable dead code).
According to its documentation, Istanbul's cover command can handle long-lived processes, so this seems not to be an issue.
My concern is the memory overhead and potential slowdown caused by instrumenting the code. More generally, any thoughts, feedback, and recommendations about collecting code coverage from real traffic would be very helpful.
Thanks!
Your best bet to do what you want would be to run your app on
SmartOS, OmniOS or some other illumos/OpenSolaris distro and use DTrace.
See:
http://dtrace.org/blogs/about/
https://en.wikipedia.org/wiki/DTrace
https://wiki.smartos.org/display/DOC/DTrace
I have an API written in Node with a MongoDB back end.
I'm using supertest to automate testing of the API. Of course this results in a lot of changes to the database, and I'd like to get some input on options for managing this. The goal is for each test to have no permanent impact on the database: it should look exactly the same after the test is finished as it did before the test ran.
In my case, I don't want the database to be dropped or fully emptied out between tests. I need some real data maintained in the database at all times; I just want the changes made by the tests themselves to be reverted.
With a relational database, I would put a transaction around each unit test and roll it back after the test was done (pass or fail). As far as I know, this is not an option with Mongo.
Some options I have considered:
Fake databases
I've heard of in-memory databases like fongo (which is a Java thing) and tingodb. I haven't used them, but the issue with this type of solution is always that it requires good parity with the actual product to remain viable. As soon as I use a Mongo feature that the fake doesn't support, I'll have a problem unit testing.
Manual cleanup
There is always the option of having a routine that finds all the data added by a test (marked in some way) and removes it. You'd have to be careful about updates and deletes here, and there is likely a lot of upkeep in making sure the cleanup routine accurately cleans things up.
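A sketch of such a routine with the Node MongoDB driver, where `_testRunId` is a hypothetical marker convention (and which, as noted, only covers inserts, not updates or deletes):

```js
// Marker-based cleanup sketch; the _testRunId field is a hypothetical
// convention, and only documents inserted by the test are handled.
const runId = `test_${Date.now()}`;

async function insertForTest(collection, doc) {
  // Tag everything the test inserts so the cleanup can find it.
  await collection.insertOne({ ...doc, _testRunId: runId });
}

async function cleanup(db) {
  for (const { name } of await db.listCollections().toArray()) {
    await db.collection(name).deleteMany({ _testRunId: runId });
  }
}
```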
Database copying
If it were fast enough, maybe having a baseline test database and making a copy of it before each test could work. It would have to be pretty fast, though.
So how do people generally handle this?
I think this is a fairly new way of testing without transactions.
IMHO, with Mongo >= 3.2 you can set up the inMemory storage engine, which is perfect for this kind of scenario:
1. Start mongod with the inMemory storage engine
2. Restore the baseline database
3. Create a working copy for the test
4. Perform the test on the working copy
5. Drop the working copy
6. If there are more tests, go to step 3
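A sketch of steps 3-5 with the official Node MongoDB driver; the `baseline` database name and the collection-by-collection copy are my assumptions, not part of the driver itself:

```js
// Working-copy sketch for steps 3-5; 'baseline' and the copy strategy
// are assumptions about how the restored database is named.
const { MongoClient } = require('mongodb');

async function withWorkingCopy(client, runTest) {
  const baseline = client.db('baseline');
  const working = client.db(`work_${Date.now()}`); // unique per test

  // Step 3: copy every baseline collection into the working database.
  for (const { name } of await baseline.listCollections().toArray()) {
    const docs = await baseline.collection(name).find().toArray();
    if (docs.length > 0) {
      await working.collection(name).insertMany(docs);
    }
  }

  try {
    await runTest(working); // step 4: the test runs on the working copy
  } finally {
    await working.dropDatabase(); // step 5: drop the working copy
  }
}

// Usage: MongoClient.connect('mongodb://localhost:27017')
//   .then((client) => withWorkingCopy(client, async (db) => { /* assertions */ }));
```

Since everything lives in the inMemory storage engine, both the copy and the drop stay cheap compared to doing the same against on-disk storage.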