Any idea how to test file streaming in Cucumber?
Note this is a Java microservice with a client and server architecture.
The client talks to the server on a designated port, and I just don't know how to approach this.
Most of the examples I have seen are browser-based tests with Selenium.
I am writing JUnit test cases for this and wanted to know how it should be done.
I am new to Behavior Driven Testing and I find this really exciting!
You have to imagine you are the client and that you are consuming the service: when you use the service, what do you get back? If you are Cuking, you need to think in business terms, i.e. it's about WHAT you are doing and WHY it's important, not HOW it's done. So WHAT is the point of this service, and what value does it give?
If you just want to test that it works, then I'd use a unit-testing tool instead.
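For example (a purely hypothetical sketch, not based on your actual domain), a business-facing feature for a file-streaming service might read:

    Feature: Document download
      As a client of the document service
      I want to retrieve the documents I have stored
      So that I can archive them locally

      Scenario: Client retrieves a stored document
        Given the server is storing the document "report.pdf"
        When the client requests the document "report.pdf"
        Then the client receives the complete document
        And the received content matches the stored original

The step definitions behind those lines would then be ordinary Java/JUnit code that opens the connection, streams the file and compares it with the original; the HOW stays out of the feature.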
I have a question about how best to do end-to-end/integration testing without having to mock an entire API. I've not found any great solutions and I'm starting to wonder if I'm thinking about this the wrong way.
My situation
I have a web client that I want to test which relies heavily on a REST API that is part of our larger system. For the most part, testing against an instance of the real API service seems like the right thing to do in our end-to-end testing scenario, but in some cases (e.g. provoking errors, empty lists, etc.) it would be easier to mock portions of the API than to go in and change the actual state of the API service.
In other cases, I don't need to mock the API, but I do need to confirm that requests were made as expected, essentially a spy, but on an API.
My idea for a solution
I imagined the best way to solve this would be an HTTP proxy controlled by the test suite, sitting between the system under test and the API service. The test suite would then (a rough sketch follows the list below):
Configure the system-under-test to use the API proxy
For each test
Set up mocks only on relevant endpoints (the rest are proxied to the API service)
Exercise the system-under-test
Make assertions by reading spies from the proxy
Tear down/reset the mocks and spies afterwards
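For illustration, a bare-bones version of such a proxy can be put together with nothing but Node's http module; the names (startProxy, mock, requests, reset) are made up for this sketch and are not any particular library's API:

    const http = require('http');

    function startProxy(upstreamHost, upstreamPort, listenPort) {
      const mocks = new Map(); // "METHOD /path" -> { status, body }
      const spies = [];        // every request seen, for later assertions

      const server = http.createServer((req, res) => {
        spies.push({ method: req.method, url: req.url });

        const mock = mocks.get(`${req.method} ${req.url}`);
        if (mock) {
          // Serve the configured mock instead of contacting the real API.
          res.writeHead(mock.status, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify(mock.body));
          return;
        }

        // Otherwise pass the request through to the real API service.
        const upstream = http.request(
          { host: upstreamHost, port: upstreamPort, path: req.url,
            method: req.method, headers: req.headers },
          (upstreamRes) => {
            res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
            upstreamRes.pipe(res);
          }
        );
        req.pipe(upstream);
      });

      server.listen(listenPort);
      return {
        mock: (method, path, status, body) => mocks.set(`${method} ${path}`, { status, body }),
        requests: () => spies.slice(),
        reset: () => { mocks.clear(); spies.length = 0; },
        close: () => server.close(),
      };
    }

A test would then call proxy.mock('GET', '/users', 500, { error: 'boom' }) before exercising the app, assert on proxy.requests() afterwards, and proxy.reset() in teardown; this is essentially what MockServer does, minus the Java.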
I have multiple use cases for this, one being Cypress end-to-end testing inside a browser, where monkey patching or dependency injection is not possible, because significant portions of the system under test are not executed directly by the test suite.
I found MockServer, which seems to do what I'm looking for, but it is a massive Java program that adds a lot of requirements (e.g. Java, which translates to CI costs) to an otherwise Node-based environment.
EDIT: An image in the MockServer documentation illustrates one usage which matches my scenario: the proxy sits between the system under test and the real API service.
My question(s)
Is what I'm considering a good approach to this type of situation? What is a good way (e.g. existing software) to solve this within the Node ecosystem?
I've built a web app that aggregates trading and blockchain data from several APIs and displays it in a React frontend (Node backend).
What is the best way to implement tests to check for data integrity or when there are issues?
I am extremely new to testing and would appreciate any guidance/direction. I have gone through several testing frameworks and libraries and am kind of overwhelmed.
You don't really test apps for 'integrity' of data, as you call it, especially when the data comes from external sources (i.e. not your own DB).
If you own data, you can test DB integrity, but as you say that is not the case here.
What you do, though, is write unit tests (functional, regression and end-to-end tests too, but what you want will mostly be achieved with unit tests).
Within tests, you basically provide all kinds of data to your app and check if results are what you expect them to be (both for working and breaking scenarios).
This way, you can be sure it works as you designed it.
If at some point in the future a bug is exposed, or you find one yourself, pin down precisely why the bug occurs and add a test for it.
When all of your tests pass again after you fix the code responsible for the bug, you know you are good.
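As a concrete (and hypothetical) sketch of that idea using Jest, where aggregateTrades stands in for whatever function in your app merges the API data:

    // aggregate.test.js -- aggregateTrades and its module path are illustrative
    const { aggregateTrades } = require('../src/aggregate');

    test('sums trade volumes per asset', () => {
      const trades = [
        { asset: 'BTC', volume: 2 },
        { asset: 'BTC', volume: 3 },
        { asset: 'ETH', volume: 1 },
      ];
      expect(aggregateTrades(trades)).toEqual({ BTC: 5, ETH: 1 });
    });

    test('rejects malformed API data instead of rendering garbage', () => {
      expect(() => aggregateTrades([{ asset: 'BTC' }])).toThrow();
    });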
As for libraries:
"Jest" https://jestjs.io/ is go-to library for many - it's for unit tests mostly.
Jasmine and Mocha are also popular choices.
For end-to-end testing, check out TestCafe; I recommend it.
https://github.com/DevExpress/testcafe
You should also test your API with Mocha, Chai, Supertest or Chakram.
This way, all layers of your app are covered and bugs can be spotted more quickly.
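A minimal sketch of such an API test with Mocha, Chai and Supertest, assuming your Express app is exported from app.js without calling listen() (the paths and route are illustrative):

    const request = require('supertest');
    const { expect } = require('chai');
    const app = require('../app'); // hypothetical module exporting the Express app

    describe('GET /api/prices', () => {
      it('responds with a JSON array of prices', async () => {
        const res = await request(app).get('/api/prices');
        expect(res.status).to.equal(200);
        expect(res.body).to.be.an('array');
      });
    });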
I like Cucumber a lot and find it a very useful tool for solving problems by looking at them with an outside-in approach, so I would like to use it as part of Chef projects too. I have successfully integrated it into the project I'm working on, but when it comes to writing the business goal of the features I have some doubts.
Who is the end user here?
Depending on this, the feature will be more service-oriented or not, i.e.:
If the feature is more architecture-facing, then I could write a MongoDB feature which describes that I need a MongoDB service up and running and that the application is linked to it.
On the other hand, I could just write application features, forget about the infrastructure behind them, and assume that if the Cucumber tests pass for the application then the infrastructure is fine too. (I don't like this approach.)
Which of the two approaches is better? I like the first one most, but I'm just a noob in these lands. Please give me your thoughts.
I want to write a web application with Node.js and MongoDB, and I have also been given the task of testing it. I would like to know whether there are any tools like JMeter, or anything else, for load/stress testing of Node.js.
EDIT
My application is going to be an information-extraction kind of application, and the client expects extraction to take no more than 10 seconds for one document. Currently I have the same application written in C#, but it's not scaling up to the client's expectations. Then I came across the beautiful and fast Node.js. I think Node.js can help me a lot.
Please enlighten me!
Try nodeload: it's a collection of node.js modules for load testing HTTP services.
As a developer, you should be able to write load tests and get informative reports without having to learn another framework. You should be able to build by example and selectively use the parts of a tool that fit your task. Being a library means that you can use as much or as little of nodeload as makes sense, and you can create load tests with the power of a full programming language. For example, if you need to execute some function at a given rate, just use the 'nodeload/loop' module, and write the rest yourself.
Just found out that this package is no longer under development, so here are some active forks:
https://github.com/gamechanger/nodeload
https://github.com/Samuel29/NodeStressSuite
Why couldn't you test a Node server with JMeter? For most load tests it doesn't matter what language your server is written in; you're just hitting it with a bunch of requests.
In any case, you could try loadtest, which is implemented in Node.
Runs a load test on the selected HTTP or WebSockets URL. The API allows for easy integration in your own tests.
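A minimal example using the package's loadTest API (the URL and numbers are placeholders):

    const loadtest = require('loadtest');

    const options = {
      url: 'http://localhost:8000/extract', // placeholder endpoint
      maxRequests: 1000,                    // stop after this many requests
      concurrency: 10,                      // simultaneous clients
    };

    loadtest.loadTest(options, (error, result) => {
      if (error) {
        return console.error('Load test failed:', error);
      }
      // result holds aggregate stats such as total requests and mean latency
      console.log('Load test finished:', result);
    });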
Edit:
This answer provides more options:
NodeJs stress testing tools/methods [closed]
Try artillery. Here are its features, with the descriptions taken from the documentation:
Multiple protocols: Load test HTTP, WebSocket, Socket.io, Kinesis, HLS and more.
Scenarios: Support for complex scenarios to test multi-step interactions in your API or web app (great for ecommerce, transactional APIs, game servers etc).
Load testing & Functional testing: reuse the same scenario definitions to run performance tests or functional tests on your API or backend.
Performance metrics: get detailed performance metrics (latency, requests per second, concurrency, throughput).
Scriptable: write custom logic in JS, using any of the thousands of useful npm modules.
Integrations: statsd support out of the box for real-time reporting (integrate with Datadog, Librato, InfluxDB etc).
Extensible: write custom reporters, custom plugins, custom protocol engines etc.
and more! HTML reports, nice CLI, parameterization with CSV files.
How do you test pages with single sign-on (SSO) login during integration tests (for instance with Capybara or Cucumber)? For a normal login, you would write a method which visits the login page, fills out the form, and submits it. This is a bit difficult if the login form comes from an external SSO server like Shibboleth or OpenAM/OpenSSO. How is it possible to write integration tests for pages protected by SSO?
A similar problem is integration testing with a separate search server (Solr or Sphinx). You would probably solve it by using some form of mocks or stubs. Can someone give a good example of how to mock or stub an SSO for Cucumber or Capybara? If this is too difficult, then a comparable example for a search server would be helpful, too.
Integration testing of an SSO application is a special case of a more general problem: testing distributed applications. This is a difficult problem and there does not seem to be a magic bullet for it. There are various ways to combine a set of different servers or services and test them as a whole. The two extremes are
a) Test an instance of the whole system. You don't need any mocks or stubs then, but you need a complete, full-fledged setup of the entire stack, including a running instance of every server involved. For each test, set up the entire application stack and test the distributed system as a whole, with all the components involved, which is difficult in general. This only works if every component and every connection is working well.
b) Write an integration test for each component, treat it as a black box, and cover the missing connections with mocks and stubs. In practice, this approach is more common for unit testing, where one writes tests for each MVC layer: model, view, and controller (view and controller often tested together).
In both cases, broken connections have not yet been considered. In principle, one has to check the following possibilities for each external server/service:
is down
is up and behaves well
is up and replies incorrectly
is up, but you send it wrong data
Basically, testing distributed apps is difficult. This is one reason why distributed applications are hard to develop. The more parts and servers a distributed application has, the more difficult it is to set up many full-fledged environments like production, staging, test and development. The larger the system, the more difficult integration testing becomes. In practice, one uses the first approach and creates a small but complete version of the whole application. A typical simple setup would be App Server + DB Server + Search Server.
On your development machine, you would have two different versions of a complete system:
One DB Server with multiple databases (development and test)
One Search Server with multiple indexes (development and test)
The common Ruby plugins for search servers (Thinking Sphinx for Sphinx or Sunspot for Solr) support Cucumber and integration tests. They "turn on" the search server for certain portions of your tests. For the code that does not use the search server, they "stub" the server or mock out the connection to avoid unneeded indexing.
For RSpec tests, it is possible to stub out the authentication methods, for example in a controller test:

    before :each do
      # Pretend a user is already logged in, bypassing the SSO round trip.
      @current_user = Factory(:user)
      controller.stub!(:current_user).and_return(@current_user)
      controller.stub!(:logged_in?).and_return(true)
    end
This also works for helper and view tests, but not for RSpec request or integration tests. For Cucumber tests, it is possible to stub out the search server by replacing the connection to the search server with a stub (for Sunspot and Solr this can be done by replacing Sunspot.session, which encapsulates the connection to Solr).
This all sounds good; unfortunately, it is a bit hard to transfer this solution to an SSO server. A typical minimal setup would be App Server + DB Server + SSO Server. A complete integration test would mean we have to set up one SSO server with multiple user data stores (development and test). Setting up an SSO server is already difficult enough; setting up an SSO server with multiple user data stores is probably not a very good idea.
One possible solution to the problem may lie somewhere in the direction of FakeWeb, a Ruby library written by Blaine Cook for faking web requests. It lets you decouple your test environment from live services. Faking the response of an SSO server is unfortunately a bit hard.
Another possible solution, which I ended up using, is a fake login, i.e. a fake login method that can be called within the integration test. This fake login is a dynamic method added only during the test (through a form of monkey patching). This is a bit messy, but it seems to work.
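For illustration, one way to wire up such a fake login in a Rails app (all names here are hypothetical, and the details depend on what your SSO integration stores in the session; the idea above uses a monkey-patched method, but a test-only route has the same shape):

    # config/routes.rb (inside the routes block) -- only exposed in tests
    get '/fake_login/:user_id' => 'sessions#fake_login' if Rails.env.test?

    # app/controllers/sessions_controller.rb
    class SessionsController < ApplicationController
      def fake_login
        # Set whatever the real SSO callback would normally put in the session.
        session[:user_id] = params[:user_id]
        redirect_to root_path
      end
    end

    # features/step_definitions/authentication_steps.rb
    Given /^I am logged in as "([^"]*)"$/ do |login|
      user = User.find_by_login!(login) # hypothetical finder
      visit "/fake_login/#{user.id}"
    end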