Mocha tests hang after completion when working with web3 smart contracts - node.js

I wrote some integration tests for a node app that works with Ethereum smart contracts (and thus uses the contracts' state as data storage). I am instantiating some smart contract interfaces using web3, and in the assertion parts of the tests I use them to check that valid information has been written to the blockchain.
However, after the tests pass, the mocha process keeps running and I have to shut it down manually. I suspect this happens because the smart contract interfaces are essentially open connections that never get closed; I know the same thing happens with regular database connections (see: Mocha hangs after execution when connecting with Mongoose).
I couldn't find a disconnect method or anything similar in the web3 API, though. Has anyone had a similar experience with this?

Someone pointed out to me that there is an --exit flag you can pass to mocha, which kills the process after all tests have finished running. This is probably a good enough solution for now.
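For reference, this is roughly what the setup looks like with that flag plus a best-effort provider shutdown in an after() hook. The provider-closing call is a guess and depends on the web3 version and provider type, and the endpoint is a placeholder:

    // package.json script (snippet): "test": "mocha --exit"

    // test/contract.test.js
    const Web3 = require('web3');
    const assert = require('assert');

    // Placeholder endpoint -- point this at the node your tests talk to.
    const web3 = new Web3('ws://localhost:8546');

    describe('contract state', function () {
      it('has written valid information to the blockchain', async function () {
        const accounts = await web3.eth.getAccounts();
        assert.ok(accounts.length > 0);
      });

      after(function () {
        // Guess: for a WebSocket provider, closing the underlying socket
        // releases the handle that keeps the mocha process alive. The exact
        // call varies by web3 version (newer versions expose a disconnect()
        // method on the provider itself).
        const provider = web3.currentProvider;
        if (provider && provider.connection && provider.connection.close) {
          provider.connection.close();
        }
      });
    });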

Related

Sophos interfering with NodeJS processes on Mac OSX Big Sur

Ever since I upgraded to Big Sur, I've noticed that Sophos has begun to interfere dramatically whenever I run Jest tests. CPU usage for Sophos spikes to around 400% when running even a modest Jest test program, with 71 tests currently taking 98.6 seconds to run. After the Jest process completes, Sophos goes back to sleep and no longer takes up substantial resources.
I run these tests from my terminal using Node. My hypothesis is that the problem is actually between Node and Sophos, and it's just exacerbated by the way that Jest runs its tests.
Has anyone come across this problem before, and is there anything I can do to convince Sophos to leave Node alone?
For what it's worth, the tests themselves are bog-standard JS and React unit tests, with the React tests written using React Testing Library.

How to run and stop running Artillery.io programmatically?

I am interested in implementing artillery.io tests, but I want to run them from inside a small server app that receives REST calls to start and stop the load on demand. However, most of the help available online focuses on CLI-based, stand-alone Artillery tests.
How can I run and stop Artillery workers from inside a program?
Granted, I can run a child process and then dismantle it, but would it be possible to control Artillery natively via some sort of programmatic API or REST API?
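For example, the child-process approach I mentioned would look roughly like this; the artillery CLI is assumed to be installed, and load-test.yml is a placeholder scenario file:

    // Minimal sketch: control the Artillery CLI from a small Express server.
    const express = require('express');
    const { spawn } = require('child_process');

    const app = express();
    let artilleryProcess = null;

    app.post('/start', (req, res) => {
      if (artilleryProcess) return res.status(409).send('already running');
      artilleryProcess = spawn('artillery', ['run', 'load-test.yml'], {
        stdio: 'inherit',
      });
      artilleryProcess.on('exit', () => { artilleryProcess = null; });
      res.send('started');
    });

    app.post('/stop', (req, res) => {
      if (!artilleryProcess) return res.status(409).send('not running');
      artilleryProcess.kill('SIGINT'); // let Artillery shut down gracefully
      res.send('stopping');
    });

    app.listen(3001);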

Testing a Node library working with Docker containers

I'm currently writing a Node library to execute untrusted code within Docker containers. It basically maintains a pool of containers running, and provides an interface to run code in one of them. Once the execution is complete, the corresponding container is destroyed and replaced by a new one.
The four main classes of the library are:
Sandbox. Exposes a constructor with various options including the pool size, and two public methods: executeCode(code, callback) and cleanup(callback)
Job. A class with two attributes, code and callback (to be called when the execution is complete)
PoolManager, used by the Sandbox class to manage the pool of containers. Provides the public methods initialize(size, callback) and executeJob(job, callback). It has internal methods related to the management of the containers (_startContainer, _stopContainer, _registerContainer, etc.). It uses an instance of the dockerode library, passed in the constructor, to do all the Docker-related work.
Container. A class with the attributes tmpDir, dockerodeInstance, IP and a public method executeCode(code, callback) which basically sends an HTTP POST request to ContainerIP:3000/compile along with the code to compile (a minimalist API runs inside each Docker container).
In the end, the final users of the library will only be using the Sandbox class.
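For illustration, end-user usage would look something like this; the require path and option names are placeholders, only the method signatures come from the description above:

    const Sandbox = require('./lib/sandbox');

    const sandbox = new Sandbox({ poolSize: 5 });

    sandbox.executeCode('console.log("hello from the container");', (err, result) => {
      if (err) return console.error('execution failed:', err);
      console.log('output:', result);

      sandbox.cleanup((cleanupErr) => {
        if (cleanupErr) console.error('cleanup failed:', cleanupErr);
      });
    });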
Now, my question is: how should I test this?
First, it seems pretty clear to me that I should begin by writing functional tests against my Sandbox class (a rough mocha sketch follows the list below):
it should create X containers, where X is the required pool size
it should correctly execute code (including the security aspects: handling timeouts, fork bombs, etc. which are in the library's requirements)
it should correctly cleanup the resources it uses
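Something along these lines, where the option names, the timeout value and the way a timeout is reported are assumptions about how Sandbox behaves:

    const assert = require('assert');
    const Sandbox = require('../lib/sandbox');

    describe('Sandbox (functional)', function () {
      this.timeout(60000); // starting real containers is slow

      let sandbox;

      before(function () {
        sandbox = new Sandbox({ poolSize: 3, executionTimeout: 2000 });
      });

      after(function (done) {
        sandbox.cleanup(done);
      });

      it('executes well-behaved code and returns its output', function (done) {
        sandbox.executeCode('console.log("ok");', function (err, result) {
          assert.ifError(err);
          assert.ok(/ok/.test(String(result)));
          done();
        });
      });

      it('stops code that never terminates once the timeout is reached', function (done) {
        // Assumption: a timed-out execution is surfaced as an error.
        sandbox.executeCode('while (true) {}', function (err) {
          assert.ok(err, 'expected a timeout error');
          done();
        });
      });
    });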
But then I'm not sure what else it would make sense to test, how to do it, and if the architecture I'm using is suitable to be correctly tested.
Any idea or suggestion related to this is highly appreciated! :) And feel free to ask for a clarification if anything looks unclear.
Christophe
Try and separate your functional and unit testing as much as you can.
If you make a minor change to your constructor on Sandbox then I think testing will become easier. Sandbox should take a PoolManager directly. Then you can mock the PoolManager and test Sandbox in isolation, which, it appears, is just the creation of Jobs, calling the PoolManager for Containers, and cleanup. OK, now Sandbox is unit tested.
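A rough sketch of what I mean, with the injected PoolManager stubbed via sinon. The class and the test are shown in one snippet for brevity, and anything beyond the methods you described is an assumption:

    // lib/sandbox.js (after the suggested change): the PoolManager is injected
    class Sandbox {
      constructor(poolManager, options) {
        this.poolManager = poolManager;
        this.options = options || {};
      }

      executeCode(code, callback) {
        const job = { code, callback }; // a Job is just code + callback
        this.poolManager.executeJob(job, callback);
      }
    }

    // test/sandbox.test.js: the PoolManager is replaced by a sinon stub
    const sinon = require('sinon');
    const assert = require('assert');

    describe('Sandbox (unit)', function () {
      it('wraps the code in a Job and hands it to the PoolManager', function () {
        const poolManager = { executeJob: sinon.stub().yields(null, 'result') };
        const sandbox = new Sandbox(poolManager, {});

        sandbox.executeCode('1 + 1', function (err, result) {
          assert.ifError(err);
          assert.strictEqual(result, 'result');
        });

        assert.ok(poolManager.executeJob.calledOnce);
        assert.strictEqual(poolManager.executeJob.firstCall.args[0].code, '1 + 1');
      });
    });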
PoolManager may be harder to unit test as the Dockerode client might be hard to mock (the API is fairly big). Regardless of whether you mock it or not, you'll want to test the following (a stub sketch follows this list):
Growing/shrinking the pool size correctly
Testing sending more requests than available containers in the pool
How stuck containers are handled, both when starting and when stopping
Handling of network failures (easier when you mock things)
Retries
Any other failure cases you can think of
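Something like this, again using sinon, and assuming PoolManager calls createContainer on the injected dockerode instance; adjust the stubbed methods and the require path to whatever your implementation actually uses:

    const sinon = require('sinon');
    const assert = require('assert');
    const PoolManager = require('../lib/pool-manager'); // placeholder path

    function makeFakeDocker() {
      const fakeContainer = {
        start: sinon.stub().yields(null),
        stop: sinon.stub().yields(null),
        remove: sinon.stub().yields(null),
      };
      return {
        createContainer: sinon.stub().yields(null, fakeContainer),
      };
    }

    describe('PoolManager (unit)', function () {
      it('starts one container per pool slot', function (done) {
        const fakeDocker = makeFakeDocker();
        const pool = new PoolManager(fakeDocker);

        pool.initialize(3, function (err) {
          assert.ifError(err);
          assert.strictEqual(fakeDocker.createContainer.callCount, 3);
          done();
        });
      });

      it('surfaces network failures from the Docker daemon', function (done) {
        const fakeDocker = makeFakeDocker();
        fakeDocker.createContainer.yields(new Error('ECONNREFUSED'));

        const pool = new PoolManager(fakeDocker);
        pool.initialize(1, function (err) {
          assert.ok(err, 'expected the initialization error to propagate');
          done();
        });
      });
    });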
The Container can be tested by firing up the API from within the tests (in a container or locally). If it's that minimal, recreating it should be straightforward. Once you have that, it sounds like you're really just testing an HTTP client.
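For example, exercising Container.executeCode against a local stub of the in-container API (POST /compile on port 3000, as you described); the Container constructor signature and the shape of the response are assumptions:

    const express = require('express');
    const assert = require('assert');
    const Container = require('../lib/container'); // placeholder path

    describe('Container (against a local stub API)', function () {
      let server;

      before(function (done) {
        const api = express();
        api.use(express.json());
        api.post('/compile', (req, res) => res.json({ output: 'ok' }));
        server = api.listen(3000, done);
      });

      after(function (done) {
        server.close(done);
      });

      it('posts the code to /compile and yields the result', function (done) {
        const container = new Container({ IP: '127.0.0.1' });
        container.executeCode('console.log("hi");', function (err, result) {
          assert.ifError(err);
          assert.deepStrictEqual(result, { output: 'ok' });
          done();
        });
      });
    });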
The source code for the actual API within the container can be tested however you like with standard unit tests. Because you're dealing with untrusted code there are a lot of possibilities:
Doesn't compile
Never completes execution
Never starts
All sorts of bombs
Uses all of the host's disk space
Is a bot and talks over the network
The code could do basically anything. You'll have to pick the things you care about. Try and restrict everything else.
Functional tests are going to be important too; there are a lot of pieces to deal with here, and mocking Docker isn't going to be easy.
Code isolation is a difficult problem; I wish Docker was around last time I had to deal with it. Just remember that your customers will always do things you didn't expect! Good luck!

Testing Cluster in Node.js

I'm currently using Mocha for testing but I seem to be running into some errors testing an app that uses Cluster.
Basically, the app exits, but some of the workers keep doing things afterwards, and this produces weird output that ends up failing the "before all" hooks even after the tests have finished.
I saw this thread How to test a clustered Express app with Mocha?
but I wonder if Mocha is even the right module to test a Cluster app with. If so, can someone please point me to a tutorial on how to do it? I couldn't find any after Googling.
I am also using Express in case that complicates things.

How can I efficiently load test a webapp with a headless browser?

I have experience with a few headless browsers, but only for testing, not load testing. What's the best way to launch 500-1000 WebSocket clients to load test the application? Is it as simple as looping and launching clients with a setTimeout that gets incrementally longer?
I can build out the actual tests myself; I'm just curious which framework is best suited for this.
I have experience with ZombieJS and PhantomJS (along with Casper and Webspecter).
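For concreteness, the naive version I have in mind looks something like this, using the ws package; the target URL, client count and delay are placeholders:

    const WebSocket = require('ws');

    const TARGET = 'ws://localhost:8080';
    const CLIENTS = 500;
    const RAMP_UP_MS = 50; // extra delay added per client

    for (let i = 0; i < CLIENTS; i++) {
      setTimeout(() => {
        const socket = new WebSocket(TARGET);
        socket.on('open', () => socket.send(JSON.stringify({ client: i })));
        socket.on('error', (err) => console.error(`client ${i} failed:`, err.message));
      }, i * RAMP_UP_MS);
    }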
It looks like you want to do scalability/load testing on your server (related: How do you test client side performance with hundreds of thousands of virtual clients?). Anyway, I assume you have already tried using headless PhantomJS clients. That's what I tried as well, and it worked well for me; I monitored CPU, network throughput and memory usage using some utility plugins.
There is a plugin which provides JMeter integration for the WebSocket protocol. This might be helpful:
https://github.com/kawasima/jmeter-websocket/
