We are attempting to ingest data from a streaming sports API in Node.js for our React app (a MERN app). According to their API docs: "The livestream read APIs are long running HTTP GET calls. They are designed to be used to receive a continuous stream of game actions." So we are attempting to ingest data from a long-running HTTP GET call in Node.js, to use in our React app.
Are long-running HTTP GET calls the same as websockets? We have not ingested streaming data previously, so we are not sure about either of these.
So far, we have added the following code into the index.js of our node app:
index.js
...
// long-running HTTP request (using the https module, since the feed URL is https)
https.get(`https://live.test.wh.geniussports.com/v2/basketball/read/1234567?ak=OUR_KEY`, res => {
  res.on('data', chunk => {
    console.log(`NEW CHUNK`);
    console.log(Buffer.from(chunk).toString('utf-8')); // simply console log for now
  });
  res.on('end', () => {
    console.log('ENDING');
  });
});

// Start Up The Server
const PORT = process.env.PORT || 8080;
app.listen(PORT, () => console.log(`Express server up and running on port: ${PORT}`));
This successfully connects and console-logs data in the terminal, and continues to console-log new data as it becomes available in the long-running GET call.
Is it possible to send this data from our http.get() request up to React? Can we create a route for a long-running request in the same way that other routes are made? And then call that route in React?
Server-sent events work like this, with a text/event-stream content-type header.
Server-sent events have a number of benefits over Websockets, so I wouldn't say that it's an outdated technique, but it looks like they rolled their own SSE, which is definitely not great.
My recommendation is to actually just use SSE for your use-case. Built-in browser support and ideal for a stream of read-only data.
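As a minimal sketch of what that could look like with Express (the /game-stream route name is my own choice; the upstream URL is the one from the question):

const express = require('express');
const https = require('https');

const app = express();

app.get('/game-stream', (req, res) => {
  // Tell the browser this is an SSE stream.
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive'
  });

  const upstream = https.get(
    `https://live.test.wh.geniussports.com/v2/basketball/read/1234567?ak=OUR_KEY`,
    apiRes => {
      apiRes.on('data', chunk => {
        // Each SSE frame is "data: ...\n\n"; real code would also need to
        // buffer chunks so a single JSON message isn't split across frames.
        res.write(`data: ${chunk.toString('utf-8')}\n\n`);
      });
      apiRes.on('end', () => res.end());
    }
  );

  // Stop reading from the API when the browser disconnects.
  req.on('close', () => upstream.destroy());
});

app.listen(8080);

On the React side you can then open the stream with the built-in EventSource API (new EventSource('/game-stream')) and listen for message events; no extra client library is needed.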
To answer question #1:
Websockets are different from long-running HTTP GET calls. Websockets are full-duplex connections that let you send messages of any length in either direction. When a Websocket is created, it uses HTTP for the handshake, but then changes the protocol using the Upgrade: header. The Websockets protocol itself is just a thin layer over TCP, to enable discrete messages rather than a stream. It's pretty easy to design your own protocol on top of Websockets.
To answer question #2:
I find Websockets flexible and easy to use, though I haven't used the server-sent events that Evert describes in his answer. Websockets could do what you need (as could SSE according to Evert).
If you go with Websockets, note that it's possible to queue a message to be sent, and then (like any network connection) have the Websocket close before it's sent, losing any unsent messages. Thus, you probably want to build in acknowledgements for important messages. Also note that Websockets close after about a minute of being idle, at least on Android, so you need to either send heartbeat messages or be prepared to reconnect as needed.
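As a rough browser-side sketch of that heartbeat-and-reconnect pattern (the 'ping' message shape, the 30-second interval, and the URL are all arbitrary choices, not part of any standard):

function connect(url) {
  const ws = new WebSocket(url);
  let heartbeat;

  ws.onopen = () => {
    // Application-level heartbeat so an idle connection isn't silently dropped.
    heartbeat = setInterval(() => ws.send(JSON.stringify({ type: 'ping' })), 30000);
  };

  ws.onmessage = event => {
    console.log('message', event.data);
  };

  ws.onclose = () => {
    clearInterval(heartbeat);
    // Reconnect after a short delay; remember that queued-but-unsent
    // messages are lost, so important ones may need to be resent.
    setTimeout(() => connect(url), 5000);
  };

  return ws;
}

connect('wss://example.com/stream');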
If you decide to go with websockets, I would recommend this approach using socket.io on both client and server:
Server-Side code:
const https = require('https');
const server = require('http').createServer();
const io = require('socket.io')(server);
const PORT = process.env.PORT || 8080;

io.on("connection", (socket) => {
  // Start reading the upstream feed for this client and forward each chunk.
  https.get(`https://live.test.wh.geniussports.com/v2/basketball/read/1234567?ak=OUR_KEY`, res => {
    res.on('data', chunk => {
      console.log(`NEW CHUNK`);
      let dataString = Buffer.from(chunk).toString('utf-8');
      socket.emit('socketData', { data: dataString });
    });
    res.on('end', () => {
      console.log('ENDING');
    });
  });
});

server.listen(PORT);
Client-Side code:
import { io } from 'socket.io-client';

const socket = io('<your-backend-url>');

socket.on('socketData', (data) => {
  // `data` is the same object that we emitted from the server
  console.log(data);
  // use your websocket data here
});
I'm building a web scraper with Node and Cheerio, and for a certain website I'm getting the following error (it only happens on this one website, no others that I try to scrape).
It happens at a different location every time, so sometimes it's url x that throws the error, other times url x is fine and it's a different url entirely:
Error!: Error: socket hang up using [insert random URL, it's different every time]
Error: socket hang up
at createHangUpError (http.js:1445:15)
at Socket.socketOnEnd [as onend] (http.js:1541:23)
at Socket.g (events.js:175:14)
at Socket.EventEmitter.emit (events.js:117:20)
at _stream_readable.js:910:16
at process._tickCallback (node.js:415:13)
This is very tricky to debug, I don't really know where to start. To begin, what IS a socket hang up error? Is it a 404 error or similar? Or does it just mean that the server refused a connection?
I can't find an explanation of this anywhere!
EDIT: Here's a sample of code that is (sometimes) returning errors:
function scrapeNexts(url, oncomplete) {
  request(url, function(err, resp, body) {
    if (err) {
      console.log("Uh-oh, ScrapeNexts Error!: " + err + " using " + url);
      errors.nexts.push(url);
    }
    $ = cheerio.load(body);
    // do stuff with the '$' cheerio content here
  });
}
There is no direct call to close the connection, but I'm using the Node request module, which (as far as I can tell) uses http.get, so this shouldn't be required; correct me if I'm wrong!
EDIT 2: Here's an actual, in-use bit of code that is causing errors. prodURL and other variables are mostly jquery selectors that are defined earlier. This uses the async library for Node.
function scrapeNexts(url, oncomplete) {
  request(url, function (err, resp, body) {
    if (err) {
      console.log("Uh-oh, ScrapeNexts Error!: " + err + " using " + url);
      errors.nexts.push(url);
    }
    async.series([
      function (callback) {
        $ = cheerio.load(body);
        callback();
      },
      function (callback) {
        $(prodURL).each(function () {
          var theHref = $(this).attr('href');
          urls.push(baseURL + theHref);
        });
        var next = $(next_select).first().attr('href');
        oncomplete(next);
      }
    ]);
  });
}
There are two cases when socket hang up gets thrown:
When you are a client
When you, as a client, send a request to a remote server and receive no timely response, your socket is ended, which throws this error. You should catch this error and decide how to handle it: whether to retry the request, queue it for later, etc.
When you are a server/proxy
When you, as a server, perhaps a proxy server, receive a request from a client, then start acting upon it (or relay the request to the upstream server), and before you have prepared the response, the client decides to cancel/abort the request.
This stack trace shows what happens when a client cancels the request.
Trace: { [Error: socket hang up] code: 'ECONNRESET' }
at ClientRequest.proxyError (your_server_code_error_handler.js:137:15)
at ClientRequest.emit (events.js:117:20)
at Socket.socketCloseListener (http.js:1526:9)
at Socket.emit (events.js:95:17)
at TCP.close (net.js:465:12)
Line http.js:1526:9 points to the same socketCloseListener mentioned by @Blender, particularly:
// This socket error fired before we started to
// receive a response. The error needs to
// fire on the request.
req.emit('error', createHangUpError());
...
function createHangUpError() {
var error = new Error('socket hang up');
error.code = 'ECONNRESET';
return error;
}
This is a typical case if the client is a user in the browser. The request to load some resource/page takes long, and users simply refresh the page. Such action causes the previous request to get aborted which on your server side throws this error.
Since this error is caused by the client's own action, they don't expect to receive any error message, so there is no need to treat this error as critical; just ignore it. This is reinforced by the fact that on such an error the res socket your client was listening on is destroyed, even though it still reports itself as writable.
console.log(res.socket.destroyed); //true
So there is no point in sending anything, other than explicitly closing the response object:
res.end();
However, if you are a proxy server that has already relayed the request upstream, what you should definitely do is abort your internal request to the upstream, indicating your lack of interest in the response; that in turn tells the upstream server that it can, perhaps, stop an expensive operation.
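A minimal sketch of that, assuming a hand-rolled proxy (the upstream host name is a placeholder):

const http = require('http');

http.createServer((clientReq, clientRes) => {
  // Relay the incoming request to the upstream server.
  const upstreamReq = http.request({
    host: 'upstream.internal',      // placeholder
    method: clientReq.method,
    path: clientReq.url,
    headers: clientReq.headers
  }, upstreamRes => {
    clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
    upstreamRes.pipe(clientRes);
  });

  clientReq.pipe(upstreamReq);

  // If the client goes away before we answered, stop the upstream work too.
  clientReq.on('aborted', () => upstreamReq.destroy());

  upstreamReq.on('error', err => {
    if (err.code === 'ECONNRESET') return;   // client hung up; nothing to send
    if (!clientRes.headersSent) {
      clientRes.writeHead(502);
    }
    clientRes.end();
  });
}).listen(8080);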
Take a look at the source:
function socketCloseListener() {
  var socket = this;
  var parser = socket.parser;
  var req = socket._httpMessage;
  debug('HTTP socket close');
  req.emit('close');
  if (req.res && req.res.readable) {
    // Socket closed before we emitted 'end' below.
    req.res.emit('aborted');
    var res = req.res;
    res.on('end', function() {
      res.emit('close');
    });
    res.push(null);
  } else if (!req.res && !req._hadError) {
    // This socket error fired before we started to
    // receive a response. The error needs to
    // fire on the request.
    req.emit('error', createHangUpError());
    req._hadError = true;
  }
}
The message is emitted when the server never sends a response.
One case worth mentioning: when connecting from Node.js to Node.js using Express, I get "socket hang up" if I don't prefix the requested URL path with "/".
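A tiny illustration, with a made-up local endpoint:

const http = require('http');

// Reportedly triggers "socket hang up" because the leading "/" is missing:
http.get({ host: 'localhost', port: 3000, path: 'api/items' }, res => res.resume());

// Works once the path is prefixed with "/":
http.get({ host: 'localhost', port: 3000, path: '/api/items' }, res => res.resume());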
Below is a simple example where I got the same error because I forgot to add the commented-out line. Uncommenting req.end() resolves the issue.
var fs = require("fs");
var https = require("https");
var options = {
host: "en.wikipedia.org",
path: "/wiki/George_Washington",
port: 443,
method: "GET"
};
var req = https.request(options, function (res) {
console.log(res.statusCode);
});
// req.end();
I used require('http') to consume an https service and it showed "socket hang up".
Then I changed require('http') to require('https'), and it works.
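In other words, match the module to the URL scheme; a minimal sketch:

// const http = require('http');   // only for http:// URLs
const https = require('https');    // use this module for https:// URLs

https.get('https://example.com/', res => {
  console.log(res.statusCode);
  res.resume(); // drain the body so the socket is released
});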
Expanding on Blender's answer, this happens in a number of situations. The most common ones I run into are:
The server crashed.
The server refused your connection, most likely blocked by User-Agent.
socketCloseListener, as outlined in Blender's answer, is not the only place that hangup errors are created.
For example, found here:
function socketOnEnd() {
  var socket = this;
  var req = this._httpMessage;
  var parser = this.parser;
  if (!req.res) {
    // If we don't have a response then we know that the socket
    // ended prematurely and we need to emit an error on the request.
    req.emit('error', createHangUpError());
    req._hadError = true;
  }
  if (parser) {
    parser.finish();
    freeParser(parser, req);
  }
  socket.destroy();
}
You could try curl with the headers and such that are being sent out from Node and see if you get a response there. If you don't get a response with curl, but you do get a response in your browser, then your User-Agent header is most likely being blocked.
Another case worth mentioning (on Linux and OS X) is that if you use a library like https for performing the requests, or if you pass https://... as the URL of the locally served instance, you will be using port 443, which is a privileged port, and you may end up with socket hang up or ECONNREFUSED errors.
Instead, use a port such as 3000 and make a plain http request.
For request module users
Timeouts
There are two main types of timeouts: connection timeouts and read timeouts. A connect timeout occurs if the timeout is hit while your client is attempting to establish a connection to a remote machine (corresponding to the connect() call on the socket). A read timeout occurs any time the server is too slow to send back a part of the response.
Note that connection timeouts emit an ETIMEDOUT error, and read timeouts emit an ECONNRESET error.
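As a sketch, the timeout is just an option on the request call (the URL and the 5-second value are placeholders):

const request = require('request');

request({ url: 'https://example.com/', timeout: 5000 }, (err, res, body) => {
  if (err) {
    // err.code indicates which failure you hit (e.g. ETIMEDOUT, ECONNRESET)
    return console.log('Request failed:', err.code);
  }
  console.log(res.statusCode);
});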
This caused me issues, as I was doing everything listed here, but was still getting errors thrown. It turns out that calling req.abort() actually throws an error, with a code of ECONNRESET, so you actually have to catch that in your error handler.
req.on('error', function(err) {
  if (err.code === "ECONNRESET") {
    console.log("Timeout occurs");
    return;
  }
  // handle normal errors
});
I had the same problem while using the nano library to connect to CouchDB. I tried to fine-tune connection pooling using the agentkeepalive library, and it kept failing with the socket hang up message.
var KeepAliveAgent = require('agentkeepalive');

var myagent = new KeepAliveAgent({
  maxSockets: 10,
  maxKeepAliveRequests: 0,
  maxKeepAliveTime: 240000
});

nano = new Nano({
  url: uri,
  requestDefaults: {
    agent: myagent
  }
});
After some struggling I was able to nail down the problem; as it turned out, it was a very simple mistake. I was connecting to the database over HTTPS, but I kept passing my nano object a keep-alive agent created the way the library's examples show (they rely on defaults that use plain http).
One simple change to use HttpsAgent did the trick:
var KeepAliveAgent = require('agentkeepalive').HttpsAgent;
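So the working setup looked roughly like this (same pool settings as above; the CouchDB URL is a placeholder):

var HttpsAgent = require('agentkeepalive').HttpsAgent;
var Nano = require('nano');

var myagent = new HttpsAgent({
  maxSockets: 10,
  maxKeepAliveRequests: 0,
  maxKeepAliveTime: 240000
});

var nano = new Nano({
  url: 'https://couchdb.example.com',   // placeholder
  requestDefaults: {
    agent: myagent                      // a keep-alive agent that actually speaks TLS
  }
});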
I think "socket hang up" is a fairly general error indicating that the connection has been terminated from the server end. In other words, the sockets being used to maintain the connection between the client and the server have been disconnected. (While I'm sure many of the points mentioned above are helpful to various people, I think this is the more general answer.)
In my case, I was sending a request with a payload in excess of 20K. This was rejected by the server. I verified this by removing text and retrying until the request succeeded. After determining the maximum acceptable length, I verified that adding a single character caused the error to manifest. I also confirmed that the client wasn't the issue by sending the same request from a Python app and from Postman. So anyway, I'm confident that, in my case, the length of the payload was my specific problem.
Once again, the specific cause is anecdotal; the general problem is "Server Says No".
I had the same problem during a request to a particular server. In my case, setting any value for User-Agent in the request options' headers fixed it for me.
const httpRequestOptions = {
  hostname: 'site.address.com',
  headers: {
    'User-Agent': 'Chrome/59.0.3071.115'
  }
};
It's not a general case and depends on server settings.
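For completeness, a sketch of actually issuing the request with those options (the hostname is the placeholder from the snippet above, and the path is made up):

const https = require('https');

const httpRequestOptions = {
  hostname: 'site.address.com',
  path: '/',                              // made-up path
  headers: {
    'User-Agent': 'Chrome/59.0.3071.115'  // any realistic value was enough in my case
  }
};

https.get(httpRequestOptions, res => {
  console.log(res.statusCode);
  res.resume();
});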
This error can also happen when working with http.request; most likely your request has not been ended yet.
Example:
const req = https.request(options, res => {})
And you always need to add this line: req.end()
By calling this function we tell Node that we are done sending the request.
As the documentation says:
With http.request() one must always call req.end() to signify the end of the request - even if there is no data being written to the request body.
Another reason can be using the Express app instance, instead of the server from const server = http.createServer(app), when creating the WebSocket server.
Wrong
const express = require('express');
const http = require('http');
const WebSocket = require('ws');

const app = express();

app.use(function (req, res) {
  res.send({ msg: "hello" });
});

const wss = new WebSocket.Server({ server: app }); // will throw an error when a client socket connects

app.listen(8080, function listening() {
  console.log('Listening on %d', 8080);
});
Correct
const express = require('express');
const http = require('http');
const WebSocket = require('ws');

const app = express();

app.use(function (req, res) {
  res.send({ msg: "hello" });
});

const server = http.createServer(app);
const wss = new WebSocket.Server({ server });

server.listen(8080, function listening() {
  console.log('Listening on %d', server.address().port);
});
It's been a long time, but another case is when performing requests that take a long time on the server side (more than 2 minutes, which is the default for Express) and the timeout parameter was not configured on the server side. In my case I was doing a client -> server -> server request (Node.js/Express), and I had to set the timeout parameter on each request route on the server and on the client.
So in both servers I needed to set the request timeout by using
req.setTimeout([your needed timeout])
on the router.
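A sketch of what that looks like on an Express route; the 5-minute value stands in for "[your needed timeout]":

const express = require('express');
const app = express();

app.get('/long-report', (req, res) => {
  // Raise the per-request socket timeout on the server side.
  req.setTimeout(5 * 60 * 1000);
  res.setTimeout(5 * 60 * 1000);

  // ...kick off the slow work here and respond when it finishes...
});

app.listen(3000);

The client making the call needs its own timeout raised as well, otherwise it will still give up first.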
I do both web (Node) and Android development. I had the Android Studio device emulator and Docker open together, and both of them were using port 8601, which produced the socket hang up error. After closing the Android Studio emulator, the Node side worked fine. Don't run the Android Studio device emulator and Docker on the same port at the same time.
There seems to be one additional case here, which is Electron not being a fan of the "localhost" domain name. In my case I needed to change this:
const backendApiHostUrl = "http://localhost:3000";
to this:
const backendApiHostUrl = "http://127.0.0.1:3000";
After that the problem just went away.
This means that DNS resolution (local or remote) might be causing some problems too.
I got a similar error when using CouchDB on OCP cluster.
const cloudantSessionStore = sessionStore.createSessionStore({
  type: 'couchdb',
  host: 'https://' + credentials['host'],
  port: credentials['port'],
  dbName: 'sessions',
  options: {
    auth: {
      username: credentials['username'],
      password: credentials['password']
    },
    cache: false
  }
});
Which should be "http", not "https", to connect with my CouchDB instance. Hope it could be helpful for anyone who is faced with similar issue.
In my case, it was because an application/json response was badly formatted (it contained a stack trace) and was never sent to the server.
That was very tricky to debug because there were no logs. This thread helped me a lot to understand what was happening.
In case you're using node-http-proxy, please be aware of this issue, which will result in a socket hang-up error: https://github.com/nodejitsu/node-http-proxy/issues/180.
The resolution, also in that link, is simply to declare the API route being proxied before express.bodyParser() among your Express routes.
Ran into this issue yesterday running my web application and node.js server through IntelliJ IDEA 2016.3.6. All I had to do was clear my cookies and cache in my Chrome browser.
If you are experiencing this error over an https connection and it happens instantly, it could be a problem setting up the SSL connection.
For me it was this issue: https://github.com/nodejs/node/issues/9845, but for you it could be something else. If it is a problem with SSL, then you should be able to reproduce it with the Node.js tls module by just trying to connect to the domain.
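For example, something along these lines (with your own domain in place of example.com) exercises only the TLS handshake, without any HTTP on top:

const tls = require('tls');

const socket = tls.connect({ host: 'example.com', port: 443, servername: 'example.com' }, () => {
  console.log('TLS handshake ok:', socket.getProtocol());
  socket.end();
});

socket.on('error', err => console.error('TLS handshake failed:', err.message));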
I think it's worth noting...
I was creating tests for Google APIs. I was intercepting the request with a makeshift server, then forwarding it to the real API. I was attempting to just pass along the headers in the request, but a few headers were causing a problem with Express on the other end.
Namely, I had to delete the connection, accept, and content-length headers before using the request module to forward the request along.
let headers = Object.assign({}, req.headers);
delete headers['connection'];
delete headers['accept'];
delete headers['content-length'];

res.end(); // We don't need the incoming connection anymore

request({
  method: 'post',
  body: req.body,
  headers: headers,
  json: true,
  url: `http://myapi/${req.url}`
}, (err, _res, body) => {
  if (err) return done(err);
  // Test my api response here as if Google sent it.
});
In my case it was not an error, but expected behavior for the Chrome browser. Chrome keeps the TLS connection alive (for speed, I think), but the Node.js server stops it after 2 minutes, and you get this error.
If you try the same GET request using the Edge browser, there is no error at all.
If you close the Chrome window, you get the error right away.
So what to do?
1) You can filter these errors out, because they are not really errors.
2) Maybe there is a better solution :)
After a long debugging session through the Node.js code, the MongoDB connection string, CORS settings, etc., for me just switching to a different port number in server.listen(port); made it work, in Postman as well; try that too. No changes to proxy settings, just the defaults.
I was using nano, and it took me a long time to figure out this error. My problem was I was using the wrong port. I had port 5948 instead of 5984.
var nano = require('nano')('http://localhost:5984');
var db = nano.use('address');
var app = express();
It might be that your server or socket connection crashed unexpectedly.
I had this error when running two applications on the same port by mistake.
I had a Next.js app and another one in Nest.js, both running on port 8080. When I looked at the .env files I realized that they had the same port, so I changed the Nest.js one to 3000 and everything worked.
I'm not saying that this is the reason for the error but it's a possibility.
Your problem might also come from an attempt to connect to an HTTP URL while your service is only published on HTTPS...
Definitely a time-consuming mistake!
Got "[GET] localhost:4200, Socket hang up" during Azure Static Web App (SWA) Emulator for Angular app.
Solution is to remove this from angular.json:
"headers": {"cross-origin-opener-policy": "same-origin-allow-popups"}
I'd like to add a live functionality to a PHP based forum - new posts would be automatically shown to users as soon as they are created.
What I find a bit confusing is the interaction between the PHP code and NodeJS+socket.io.
How would I go about informing the NodeJS server about new posts and have the server inform the clients that are watching the thread in which the post was posted?
Edit
I tried the following code, and it seems to work; my only question is whether this is considered a good solution, as it looks kind of messy to me.
I use socket.io to listen on port 81 for clients, and the server running on port 82 is only intended to be used by the forum: when a new post is created, a PHP script sends a POST request to localhost on port 82, along with the data.
Is this ok?
var io = require('socket.io').listen(81);

io.sockets.on('connection', function(socket) {
  socket.on('init', function(threadid) {
    socket.join(threadid);
  });
});

var forumserver = require('http').createServer(function(req, res) {
  if (res.socket.remoteAddress == '127.0.0.1' && req.method == 'POST') {
    req.on('data', function(chunk) {
      data = JSON.parse(chunk.toString());
      io.sockets.in(data.threadid).emit('new-post', data.content);
    });
  }
  res.end();
}).listen(82);
Your solution of a HTTP server running on a special port is exactly the solution I ended up with when faced with a similar problem. The PHP app simply uses curl to POST to the Node server, which then pushes a message out to socket.io.
However, your HTTP server implementation is broken. The data event is a Stream event; Streams do not emit messages, they emit chunks of data. In other words, the request entity data may be split up and emitted in two chunks.
If the data event emitted a partial chunk of data, JSON.parse would almost assuredly throw an exception, and your Node server would crash.
You either need to manually buffer data, or (my recommendation) use a more robust framework for your HTTP server like Express:
var express = require('express'),
    forumserver = express();

forumserver.use(express.bodyParser()); // handles buffering and parsing of the
                                       // request entity for you

forumserver.post('/post/:threadid', function(req, res) {
  io.sockets.in(req.params.threadid).emit('new-post', req.body.content);
  res.send(204); // HTTP 204 No Content (empty response)
});

forumserver.listen(82);
PHP simply needs to post to http://localhost:82/post/1234 with an entity body containing content. (JSON, URL-encoded, or multipart-encoded entities are acceptable.) Make sure your firewall blocks port 82 on your public interface.
Regarding the PHP code / forum's interaction with Node.JS, you probably need to create an API endpoint of sorts that can listen for changes made to the forum. Depending on your forum software, you would want to hook into the process of creating a new post and perform the API callback to Node.js at this time.
Socket.io out of the box is geared towards visitors of the site being connected on the frontend via Javascript. Upon the Node server receiving notification of a new post update, it would then notify connected clients of this new post and its details, at which point it would probably add new HTML to the DOM of the page the visitor is viewing.
You may want to arrange the Socket.io part of things so that users only subscribe to specific events being emitted by them being in a specific room such as "subforum123" so that they only receive notifications of applicable posts.
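On the frontend, that might look something like this (the URL, room, and event names mirror the server code above; threadId would come from the page being viewed):

// Loaded in the browser alongside the forum page.
var socket = io.connect('http://forum.example.com:81');
var threadId = 1234; // placeholder: the thread the visitor is viewing

socket.emit('init', threadId);   // joins the room for this thread on the server

socket.on('new-post', function (content) {
  // e.g. render the new post and append it to the thread's DOM
  console.log('New post:', content);
});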