Blockchain listener app exited with code 0 docker - node.js

I have a simple WebSocket application running. I use ethers.js to listen to blockchain events, which uses WebSockets in the background, and I connect to the blockchain via an Infura provider. When I dockerize the app and run the image, it stays alive for about 3-5 minutes, but then exits with code 0 without any errors or messages. When I run the application without Docker, straight from the terminal with npx ts-node src/index.ts, there is no problem and it keeps running forever.
It also logs ⛓️ [chain]: Started listening events in the Docker logs, so it starts successfully as well.
The listener captures no events and nothing else happens, so it is not the case that something executed and caused the exit. Also, when I manage to make a transaction quickly before it exits, it captures the event successfully and keeps running for some more time.
What could be the reason behind this, and what should I do to keep it running?
Here is my Dockerfile:
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npx", "ts-node", "src/index.ts"]
Here is my index.ts:
import { listenContractEvents } from './events';

listenContractEvents()
  .then(() => console.log('⛓️ [chain]: Started listening events'))
  .catch(console.log);
Here is my events.ts:
import { ethers } from 'ethers';

// contractAddress, contractAbi and the User type are defined elsewhere in the project.
const provider = new ethers.providers.WebSocketProvider(
  `wss://goerli.infura.io/ws/v3/${process.env.INFURA_API_KEY}`
);

export async function listenContractEvents() {
  const contract = new ethers.Contract(contractAddress, contractAbi, provider);

  let userList: User[] = await contract.getUserList();

  contract.on('Register', async () => {
    console.log('⛓️ [chain]: New user registered!');
    userList = await contract.getUserList();
  });
}

It seems the problem was Infura closing the WebSocket after a certain amount of idle time; once the socket closed there was nothing left keeping the Node.js event loop busy, so the process exited cleanly with code 0 and the Docker container stopped, considering the job done.
I have used the following piece of code to restart the websocket every time it closes:
provider._websocket.onclose = () => {
  listenContractEvents();
};
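Note that calling listenContractEvents() again on a provider whose socket has already closed will not actually reconnect, because the contract is still bound to the dead socket. A more defensive variant (just a sketch; it relies on _websocket, an internal, undocumented property of the ethers v5 WebSocketProvider, and it assumes listenContractEvents is changed to take the provider as a parameter) rebuilds the provider each time the socket drops:

let provider: ethers.providers.WebSocketProvider;

function connect() {
  provider = new ethers.providers.WebSocketProvider(
    `wss://goerli.infura.io/ws/v3/${process.env.INFURA_API_KEY}`
  );

  // When Infura drops the idle socket, discard the dead provider and
  // reconnect with a fresh one after a short back-off.
  provider._websocket.onclose = () => {
    setTimeout(connect, 1000);
  };

  listenContractEvents(provider).catch(console.log);
}

connect();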

Related

Restart hardhat JSON-RPC server from Node.js code

I'm using hardhat to fork the Ethereum mainnet and run some tests on it. In one terminal I start the server with:
npx hardhat node --fork https://eth-mainnet.alchemyapi.io/v2/$API_KEY
In another terminal I have Node.js code running. I need to periodically resync the network with mainnet. What is the best way to launch and terminate the hardhat server directly from the Node.js code?
I'm currently using the hardhat_reset command to force a resync with the latest block, but this is not reliable enough, as the server appears to enter a bad state and transactions stop working.
const hre = require('hardhat');

// web3 and ALCHEMY_URL are assumed to be in scope.
async function test() {
  const block = parseInt(await web3.eth.getBlockNumber());
  await hre.network.provider.request({
    method: "hardhat_reset",
    params: [{
      forking: {
        jsonRpcUrl: ALCHEMY_URL,
        blockNumber: block,
      },
    }],
  });
}
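No answer is recorded here, but one plausible approach (a sketch only; the helper names are made up, and it assumes npx hardhat node behaves the same as when run from the terminal) is to run the forked node as a child process that your test code can kill and respawn whenever it enters a bad state:

const { spawn } = require('child_process');

let node = null;

function startHardhatNode() {
  // Same command as in the terminal; inherit stdio so the node's logs stay visible.
  node = spawn(
    'npx',
    ['hardhat', 'node', '--fork', `https://eth-mainnet.alchemyapi.io/v2/${process.env.API_KEY}`],
    { stdio: 'inherit' }
  );
}

function restartHardhatNode() {
  if (node) {
    node.once('exit', startHardhatNode); // respawn once the old process is gone
    node.kill('SIGINT');
  } else {
    startHardhatNode();
  }
}

In practice you would also want to poll the RPC endpoint (for example with eth_blockNumber) until the fresh node responds before pointing tests at it.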

process.exit() not exiting process from within express .close callback, only with nodemon

I am trying to create some setup and teardown logic for an expressjs server. Here's my entry code:
import fs from "fs";
import express from "express";
import { setRoutes } from "./routes";

let app = express();

const server = app.listen(8080, function () {
  console.log(`🎧 Mock Server is now running on port : ${8080}`);
});

app = setRoutes(app);

function stop() {
  fs.rmdirSync("./uploads", { recursive: true });
  fs.mkdirSync("uploads");
  console.log("\n🧹 Uploads folder purged");
  server.on("close", function () {
    console.log("⬇ Shutting down server");
    process.exit();
  });
  server.close();
}

process.on("SIGINT", stop);

// Purge sw images on restart
process.once("SIGUSR2", function () {
  fs.rmdirSync("./uploads/swimages", { recursive: true });
  console.log("🧹 Software Images folder purged");
  process.kill(process.pid, "SIGUSR2");
});
The npm script to start this up is "start": "FORCE_COLOR=3 nodemon index.js --exec babel-node".
The setup and restart logic works as expected. I get 🎧 Mock Server is now running on port : 8080 logged to the console on startup. When I save a file, nodemon restarts the server, and the code in process.once is executed. When I want to shut it all down, I press ctrl + c in the terminal. The cleanup logic in the stop function runs, but the process never fully exits: I am still stuck in the process, and I have to hit ctrl + c again to fully exit.
As far as I know there are no open connections (other questions mention that a keep-alive connection left open will stop the server from closing properly, but as far as I can tell that is not the case here). I have tried different variations of server.close(callback), server.on('close', callback), process.exit(), process.kill(process.pid), etc., but nothing fully exits the process.
Note that if I simply run node index.js, I do not have this issue: the cleanup logic runs and the process exits to completion. It only seems to be an issue when using nodemon.
I don't want other developers to have to wait for the cleanup logic to run and then hit ctrl + c again. What am I missing to run my cleanup logic and fully exit the process in the terminal?
There is an open connection for sure. Check this package that can tell you which one: https://www.npmjs.com/package/wtfnode
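As a sketch of how that helps here (assuming npm install wtfnode), dumping the open handles just before exiting shows exactly which socket, timer, or child process is holding the event loop open:

const wtf = require('wtfnode');

process.on('SIGINT', () => {
  wtf.dump();     // prints open sockets, servers, and timers to the console
  process.exit();
});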

Node.js HTTP Get stream freezes inside docker container

I have the following code, written in Node.js using the http module.
It basically listens to an event stream (so the connection is permanent).
const http = require('http');

// EVENT_STREAM_ADDRESS is defined elsewhere.
http.get(EVENT_STREAM_ADDRESS, res => {
  res.on('data', (buf) => {
    const str = Buffer.from(buf).toString();
    console.log(str);
  });
  res.on('error', err => console.log("err: ", err));
});
If I run the above code on my Mac it works fine, and I still get data logged to the console after many hours.
But in Docker, with a pretty basic configuration, it stops receiving data after some time without any error. Other endpoints in the app keep working fine (e.g. the Express endpoints), but this one http.get listener just hangs.
Dockerfile
FROM node:current-alpine
WORKDIR /app
EXPOSE 4000
CMD npm install && npm run start
Do you have any ideas how I can overcome this? It's really hard to debug, since reproducing the situation sometimes means waiting a few hours.
Cheers
I found out what is probably happening.
The problem is Docker Desktop for Mac: it seems to pause containers in some situations, such as when your Mac goes to sleep, which interrupts the listener so it stops receiving new data chunks.
I started the same container on Ubuntu in VirtualBox and it works fine all the time.
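Independent of the host-level cause, a watchdog is a common defence for permanent streams like this. Here is a sketch (the 5-minute threshold is an arbitrary assumption, and EVENT_STREAM_ADDRESS is the constant from the question) that re-opens the connection whenever the stream goes quiet for too long:

const http = require('http');

function listen() {
  let watchdog;
  let restarted = false;

  const restart = () => {
    if (restarted) return; // reconnect at most once per request
    restarted = true;
    clearTimeout(watchdog);
    setTimeout(listen, 1000);
  };

  const req = http.get(EVENT_STREAM_ADDRESS, res => {
    const arm = () => {
      clearTimeout(watchdog);
      watchdog = setTimeout(() => req.destroy(), 5 * 60 * 1000); // stream went quiet
    };
    arm();
    res.on('data', (buf) => {
      arm(); // any chunk counts as a sign of life
      console.log(Buffer.from(buf).toString());
    });
    res.on('close', restart); // fires after destroy() and after server-side closes
  });

  req.on('error', restart);
}

listen();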

How to change vue-cli message after successful compile ("App running at...")?

I use vue-cli in my dockerized project, where the port mapping looks like this: "4180:8080". The actual message after compiling my SPA looks like:
App running at:
- Local: http://localhost:8080/app/
It seems you are running Vue CLI inside a container.
Access the dev server via http://localhost:<your container's external mapped port>/app/
The app works fine, and I can access it via http://localhost:4180/app/ as intended, but I cannot find a proper way to change the message above to show this link instead of "It seems you are running Vue CLI inside a container...". I could use webpack hooks to insert the link before the message, but I would rather find a way to change the message generated by the CLI itself. Is that possible somehow?
I came to this question as I was looking to do the same thing with bash, running inside a Docker container (possibly what you're already doing).
You could achieve this by invoking Vue CLI commands through spawning a child node process from within your Docker container (assuming your container is running node). You can then modify the output of stdout and stderr accordingly.
You can call a JavaScript file in one of two ways:
- use a shell script (bash, for example) to call node and run the script
- set the entrypoint of your Dockerfile to use the script (assuming you're running node by default)
// index.js
const { spawn } = require('child_process')

// Rewrite the CLI's placeholder text to the externally mapped port.
const replacePort = string => {
  return string.replace(`<your container's external mapped port>`, 8000)
}

const vueCLI = (appLocation, args) => {
  return new Promise((resolve, reject) => {
    const vue = spawn('vue', args, { cwd: appLocation })

    vue.stdout.on('data', (data) => {
      console.log(replacePort(data.toString('utf8', 0, data.length)))
    })

    vue.stderr.on('data', (error) => {
      console.log(replacePort(error.toString('utf8', 0, error.length)))
    })

    vue.on('close', (exitCode) => {
      if (exitCode === 0) {
        resolve()
      } else {
        reject(new Error('Vue CLI exited with a non-zero exit code'))
      }
    })
  })
}

// CLI_Options holds whatever arguments you would pass to `vue` on the command line.
vueCLI('path/to/app', CLI_Options).catch(error => console.error(error))
This approach does have drawbacks, not limited to:
- performance being slower, due to this being less efficient
- potential danger of memory leaks, subject to implementation
- risk of zombie processes, should the parent process die
For the reasons above and several others, this is a route that proved unsuitable in my specific case.
Instead of changing the message, it's better to change the port Vue is listening on:
npm run serve -- --port 4180
This automatically updates the message to show the new port, and after you update your Docker port forward for the new port, it will work again.

How do I restart a Node.js server internally in the script on global error?

I've been browsing around, but with no success. I've found npm packages like nodemon and forever, but their documentation doesn't explain how to trigger a restart from inside the script properly.
I found a code snippet on another question, but I'm not using Express or other frameworks, since the script runs a polling service, not a request/response one.
This is the code I've made so far using built-in Node.js modules, but no luck.
'use strict'

process.on('uncaughtException', (error) => {
  console.error('Global uncaughtException error caught')
  console.error(error.message)
  console.log('Killing server with restart...')
  process.exit(0)
})

process.on('exit', () => {
  console.log('on exit detected')
  const exec = require('child_process').exec
  const command = 'node app.js'
  exec(command, (error, stdout, stderr) => {
    console.log(`error: ${error.message}`)
    console.log(`stdout: ${stdout}`)
    console.log(`stderr: ${stderr}`)
  })
})

setTimeout(() => {
  errorTriggerTimeBomb() // dummy error to trigger uncaughtException for testing
}, 3000)
Just to note, I'm running the server on Termux, a Linux terminal app for Android. I know it's better to run it from a desktop, but I'm always somewhere with WiFi or mobile data, and I don't like leaving my PC on overnight.
A typical restart using something like nodemon or forever is triggered by calling process.exit() in your script.
This exits the current script, and the monitoring agent, seeing that it exited, restarts it for you. It's the same principle as if it crashed on its own: if you're trying to orchestrate a shutdown, you just exit the process and the monitoring agent restarts it.
I have a home automation server that is monitored by forever. If it crashes, forever automatically restarts it. I also have it set so that at 4am every morning it calls process.exit(), and forever automatically restarts it. I do this to prevent any memory-leak accumulation over a long period of time, and 30 seconds of downtime in the middle of the night is no big deal for my application.
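A minimal sketch of that arrangement (the file name app.js and the start command are illustrative, not from this answer):

// app.js: exit on fatal errors and let the monitor respawn the script
process.on('uncaughtException', (error) => {
  console.error('Fatal:', error.message)
  process.exit(1) // forever (or nodemon) sees the exit and restarts app.js
})

// started from the shell with: forever start app.js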
