I have the following code written in Node.js using the http module.
It basically listens to an event stream (so the connection is permanent).
const http = require('http');

http.get(EVENT_STREAM_ADDRESS, res => {
  res.on('data', (buf) => {
    const str = Buffer.from(buf).toString();
    console.log(str);
  });
  res.on('error', err => console.log("err: ", err));
});
If I run the above code on my Mac it works fine and I still get data logged to the console after many hours.
But in Docker, which has a pretty basic configuration, it stops receiving data after some time without any error. Other endpoints in the app keep working fine (e.g. Express endpoints), but this one http.get listener just hangs.
Dockerfile
FROM node:current-alpine
WORKDIR /app
EXPOSE 4000
CMD npm install && npm run start
Do you have any ideas how I can overcome this?
It's really hard to debug, because to reproduce the situation I sometimes need to wait a few hours.
Cheers
I found out what is probably happening.
The problem is Docker Desktop for Mac. It seems it sometimes pauses the container, for example when your Mac goes to sleep, which interrupts the listener so it stops receiving new data chunks.
I started the same container on Ubuntu in VirtualBox and it works fine all the time.
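If staying on Docker Desktop is unavoidable, one workaround is a watchdog that drops and reopens the request whenever no chunk has arrived for a while. This is only a sketch built on the snippet above; EVENT_STREAM_ADDRESS is the same constant, and the idle limit is a made-up value you would tune to your stream:
const http = require('http');

const IDLE_LIMIT_MS = 5 * 60 * 1000; // hypothetical: reconnect after 5 minutes of silence

function listen() {
  const req = http.get(EVENT_STREAM_ADDRESS, res => {
    let idleTimer = setTimeout(reconnect, IDLE_LIMIT_MS);

    res.on('data', (buf) => {
      clearTimeout(idleTimer);
      idleTimer = setTimeout(reconnect, IDLE_LIMIT_MS); // reset the watchdog on every chunk
      console.log(Buffer.from(buf).toString());
    });
    res.on('error', err => console.log("err: ", err));

    function reconnect() {
      res.destroy(); // drop the stalled response...
      listen();      // ...and open a fresh connection
    }
  });
  req.on('error', err => console.log("request err: ", err));
}

listen();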
Related
I have a simple WebSocket application running. I use ethers.js to listen to blockchain events, which uses WebSockets in the background. I connect to the blockchain via an Infura provider. When I dockerize the app and run the image, it keeps running for about 3-5 minutes, but then exits with code 0 without any errors or messages. When I run the application without dockerizing it, from the terminal with a simple npx ts-node src/index.ts command, there is no problem and it keeps running forever.
It also logs ⛓️ [chain]: Started listening events in the Docker logs, so it starts up successfully as well.
No event is captured by the listener and nothing else happens, so it is not the case that something is executed and causes it to exit somehow. Also, when I manage to make a transaction quickly before it exits, it captures the event successfully and keeps running for some more time.
What could be the reason behind this, and what should I do to keep it running?
Here is my Dockerfile:
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npx", "ts-node", "src/index.ts"]
Here is my index.ts:
import { listenContractEvents } from './events';

listenContractEvents()
  .then(() => console.log('⛓️ [chain]: Started listening events'))
  .catch(console.log);
Here is my events.ts:
import { ethers } from 'ethers';

// contractAddress, contractAbi and the User type are defined elsewhere in the project.
const provider = new ethers.providers.WebSocketProvider(
  `wss://goerli.infura.io/ws/v3/${process.env.INFURA_API_KEY}`
);

export async function listenContractEvents() {
  const contract = new ethers.Contract(
    contractAddress,
    contractAbi,
    provider
  );

  let userList: User[] = await contract.getUserList();

  contract.on('Register', async () => {
    console.log('⛓️ [chain]: New user registered!');
    userList = await contract.getUserList();
  });
}
It seems the problem was Infura closing the WebSocket after a certain amount of idle time; since nothing else was keeping the Node.js event loop alive once the socket closed, the process finished and the Docker container exited, considering the job done.
I used the following piece of code to restart the listener every time the WebSocket closes:
provider._websocket.onclose = () => {
  listenContractEvents();
};
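Note that this reuses a provider whose underlying socket has already closed. A slightly more defensive variant (only a sketch, assuming ethers v5 and the same contractAddress and contractAbi as above) throws the old provider away and re-creates everything on each close:
function startListening() {
  const provider = new ethers.providers.WebSocketProvider(
    `wss://goerli.infura.io/ws/v3/${process.env.INFURA_API_KEY}`
  );
  const contract = new ethers.Contract(contractAddress, contractAbi, provider);

  contract.on('Register', () => {
    console.log('⛓️ [chain]: New user registered!');
  });

  // When Infura drops the idle socket, discard the provider and reconnect.
  provider._websocket.onclose = () => {
    provider.removeAllListeners();
    setTimeout(startListening, 3000);
  };
}

startListening();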
I'm using Hardhat to fork the Ethereum mainnet and run some tests on it. In one terminal I start the server with:
npx hardhat node --fork https://eth-mainnet.alchemyapi.io/v2/$API_KEY
In another terminal I have Node.js code running. I need to periodically resync the network with mainnet. What is the best way to launch and terminate the Hardhat server directly from the Node.js code?
I'm currently using the hardhat_reset command to force a resync with the latest block, but this is not reliable enough, as the server appears to enter a bad state and transactions stop working.
const hre = require('hardhat');

async function test() {
  const block = parseInt(await web3.eth.getBlockNumber());
  await hre.network.provider.request({
    method: "hardhat_reset",
    params: [{
      forking: {
        jsonRpcUrl: ALCHEMY_URL,
        blockNumber: block,
      },
    }],
  });
}
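If hardhat_reset keeps leaving the node in a bad state, another option is to run the forked node as a child process and restart it from the same script. This is only a sketch; it assumes npx and Hardhat are on the PATH and that the Alchemy URL is available as the ALCHEMY_URL environment variable:
const { spawn } = require('child_process');

let hardhatNode = null;

function startNode() {
  hardhatNode = spawn(
    'npx',
    ['hardhat', 'node', '--fork', process.env.ALCHEMY_URL],
    { stdio: 'inherit' } // forward the node's output to this terminal
  );
}

function restartNode() {
  if (!hardhatNode) return startNode();
  hardhatNode.once('exit', startNode); // respawn once the old process has exited
  hardhatNode.kill('SIGTERM');
}

startNode();
// Call restartNode() whenever you need a full resync instead of hardhat_reset.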
I've been browsing around, but with no success. I've found some npm packages like nodemon and forever, but their documentation doesn't explain how to properly trigger a restart from inside the script.
I've found a code snippet in another question, but I'm not using Express or other frameworks, since the script runs a polling service rather than serving responses.
This is the code I've made so far using built-in Node.js modules, but no luck.
'use strict'

process.on('uncaughtException', (error) => {
  console.error('Global uncaughtException error caught')
  console.error(error.message)
  console.log('Killing server with restart...')
  process.exit(0)
})

process.on('exit', () => {
  console.log('on exit detected')
  const exec = require('child_process').exec
  var command = 'node app.js'
  exec(command, (error, stdout, stderr) => {
    console.log(`error: ${error.message}`)
    console.log(`stdout: ${stdout}`)
    console.log(`stderr: ${stderr}`)
  })
})

setTimeout(() => {
  errorTriggerTimeBomb() // Dummy error for testing triggering uncaughtException
}, 3000)
Just to note, I'm running the server on Termux, a Linux terminal app for Android. I know it would be better to run it from a desktop, but I'm always near Wi-Fi or mobile data, and I don't like leaving my PC on overnight.
A typical restart using something like nodemon or forever is triggered by calling process.exit() in your script.
This exits the current script, and the monitoring agent, seeing that it exited, restarts it for you. It's the same principle as if it crashed on its own: if you want to orchestrate a shutdown, you just exit the process and the monitoring agent restarts it.
I have a home automation server that is monitored by forever. If it crashes, forever automatically restarts it. I also have it set up so that at 4am every morning it calls process.exit(), and forever automatically restarts it. I do this to prevent any memory leak accumulation over a long period, and 30 seconds of downtime in the middle of the night is no big deal for my application.
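As a minimal sketch of that scheduled self-restart (assuming the script is kept alive by forever, nodemon, pm2 or similar):
// Compute how long until the next 4am, then exit; the process monitor restarts the script.
function msUntilNext4am() {
  const now = new Date();
  const next = new Date(now);
  next.setHours(4, 0, 0, 0);
  if (next <= now) next.setDate(next.getDate() + 1); // 4am has already passed today
  return next - now;
}

setTimeout(() => {
  console.log('Scheduled maintenance restart');
  process.exit(0);
}, msUntilNext4am());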
I'm using request.post() from Mikeal's Request module on the client and processing the upload with Busboy on the server.
On the server, the
busboy.on('field', function (fieldName, val, fieldnameTruncated, valTruncated) { ... })
event fires the correct number of times with the expected fieldNames, but val is always empty. This happens both when I run the integration tests through Mocha and when I use a browser against a locally running web server.
The catch is that this problem does not occur on the prod server or on other developers' workstations. The other developers on the project (and the prod server) run either macOS or Ubuntu. I am running Linux Mint 17 on the workstation where I'm experiencing the problem.
The problem appears not to be in the way I'm using Request or Busboy (unless it's an edge case), but rather a configuration issue on my workstation.
This is what solved the problem:
sudo chown -R $USER /usr/local
I'm new to Node.js and want to run a program that uses streams. With other programs I had to start a server alongside them (mongodb, redis, etc.), but I have no idea whether I'm supposed to run one here. Please let me know where I'm going wrong and how I can fix it.
This is the program:
var http = require('http'),
    feed = 'http://isaacs.iriscouch.com/registry/_changes?feed=continuous';

function decide(cb) {
  setTimeout(function () {
    if (Date.now() % 2) { return console.log('rejected'); }
    cb();
  }, 2000);
}

http.get(feed, function (res) {
  decide(res.pipe.bind(res, process.stdout));
  // using an anonymous function instead of bind:
  // decide(function () {
  //   res.pipe(process.stdout)
  // });
});
This is the cmd output:
C:\05-Employing Streams\05-Employing Streams\23-Playing with pipes>node npm_stream_piper.js

events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: Parse Error
    at Socket.socketOnData (http.js:1583:20)
    at TCP.onread (net.js:527:27)
Close the Node.js app running in the other shell.
Restart the terminal and run the program again.
Another server might also be using the same port that you used for Node.js. Kill the process that is using the Node.js port, then run the app again.
To find the PID of the application that is using port 8000:
$ fuser 8000/tcp
8000/tcp: 16708
Here the PID is 16708. Now kill the process using the kill [PID] command:
$ kill 16708
I had the same problem. I closed the terminal and restarted node, and that worked for me.
Well, your script throws an error and you just need to catch it (and/or prevent it from happening). I had the same error; for me it was an already used port (EADDRINUSE).
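For the feed example above, catching it means attaching 'error' handlers to both the request and the response, so the failure is logged instead of crashing the process with an unhandled 'error' event. A sketch using the same feed and decide from that snippet:
var req = http.get(feed, function (res) {
  res.on('error', function (err) { console.error('response error:', err); });
  decide(res.pipe.bind(res, process.stdout));
});

// Parse errors like the one above surface on the request object itself.
req.on('error', function (err) { console.error('request error:', err); });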
I always do the following whenever I get such an error:
# remove node_modules/
rm -rf node_modules/
# install node_modules/ again
npm install  # or: yarn
and then start the project:
npm start  # or: yarn start
It works fine after reinstalling node_modules, but I don't know whether it's good practice.
Check your terminals: this happens only when your application is already running in another terminal.
The port is already being listened on.
For what it's worth, I got this error after a clean install of Node.js and my npm packages on my current Linux distribution.
I had installed Meteor using
npm install meteor
and got the above-referenced error. After wasting some time, I found out I should have used Meteor's own way of updating itself:
meteor update
This command's output included, among other things, a message that Meteor was severely outdated (over 2 years) and that it would reinstall itself using:
curl https://install.meteor.com/ | sh
Which was probably the command I should have run in the first place.
So the solution might be to upgrade/update whatever Node.js package(s) you're using.