In the code below:
The client is supposed to start emitting images as long Base64 strings to the server, one after another at close intervals.
As soon as the first emit with a Base64 string takes place, the socket is disconnected. I tried with hardcoded 2-character strings and this doesn't happen.
I believe the problem is either the size of the data, or the fact that the socket hasn't finished emitting the first Base64 string when the 2nd, 3rd, etc. emits come.
const { Server } = require('socket.io')
// httpServer is an existing Node HTTP server instance
const io = new Server(httpServer, {
cors: {
credentials: true,
origin: 'http://localhost:4200',
methods: ["GET", "POST"]
}
})
io.on('connection', function(socket) {
console.log('connected')
socket.on('newDataFromClient', async (newDataFromTheClient) => {
// will work only after I refresh the page after a login
})
socket.on('disconnect', (reason) => {
// the reason is "transport error"
console.log('disconnected due to = ', reason)
})
})
httpServer.listen(4200)
This is the client-side code:
let socket
// this is only executed once, on page load
function setup() {
fetch('/setup')
.then((response) => response.json())
.then((setup) => {
doSomething(setup)
socket = io('http://localhost:4200', {
withCredentials: true,
transports: ['websocket']
})
})
}
// this is executed repeatedly at close intervals
async function sendToServer(imageAsBase64) {
socket.emit('newDataFromClient', imageAsBase64)
}
What am I doing wrong?
The problem was socket.io's maxHttpBufferSize limit, which is set to 1 MB by default.
Each Base64 string I'm sending is around 2.5 MB.
I had to update my server-side code to:
const io = new Server(httpServer, {
cors: {
origin: 'http://localhost:4200',
methods: ["GET", "POST"]
},
maxHttpBufferSize: 4e6 // 4 MB
})
And now everything works. I found the answer here.
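As a side note (not part of the original fix, just a sketch): Socket.IO can also carry binary payloads directly, so emitting the raw image bytes instead of a Base64 string avoids the roughly one-third size overhead that Base64 adds and keeps each emit well under the buffer limit. The imageAsBlob parameter below is hypothetical and assumes the image is available as a browser Blob:
// Hypothetical variant of sendToServer(): emit raw bytes instead of Base64.
// Socket.IO transmits ArrayBuffer payloads as binary frames.
async function sendToServer(imageAsBlob) {
  const buffer = await imageAsBlob.arrayBuffer() // Blob -> ArrayBuffer
  socket.emit('newDataFromClient', buffer)
}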
Hello guys, I have a simple example with sockets:
const express = require("express");
const app = express();
const server = require("http").createServer(app);
const io = require("socket.io")(server, {
cors: {
origin: "http://localhost:3000",
methods: ["GET", "POST"],
credentials: true
}
});
var userCount = 0;
io.on('connection', function (socket) {
userCount++;
io.emit('userCount', { userCount: userCount });
socket.on('disconnect', function() {
userCount--;
io.emit('userCount', { userCount: userCount });
});
});
and frontend:
const [userCount, setuserCount] = useState(0);
socket.on('userCount', function (data) {
setuserCount(data.userCount);
});
I don't understand why, but it fires so many requests.
My question is: is this the proper way to work with sockets?
The issue seems to be with your frontend. The following code runs again on every render of your component:
socket.on('userCount', function (data) {
setuserCount(data.userCount);
});
This means that you're adding multiple event listener functions for the single userCount event. To fix this, you can use React's useEffect() hook to register the listener once, when your component mounts:
import React, {useState, useEffect} from "react";
...
// Inside your component:
const [userCount, setuserCount] = useState(0);
useEffect(() => {
const listener = data => setuserCount(data.userCount);
socket.on('userCount', listener);
return () => socket.off('userCount', listener);
}, [setuserCount]);
This way your listener will only be added once when your component mounts, and not on every render. The cleanup function returned from the useEffect hook also removes the listener when the component unmounts (thanks @Codebling for this suggestion). Your socket.on callback will still execute multiple times, as socket.io will call it whenever your event occurs.
I have found similar code here (http://sahatyalkabov.com/jsrecipes/#!/backend/who-is-online-with-socketio) and yes, this is the correct way to use sockets. It fires so many requests because a message is fired every time a user connects and every time a user disconnects (including when you reload: a reload fires twice, once for leaving and once for coming back to the site).
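As a side note, here is a sketch of an alternative that avoids keeping a manual counter: the server already tracks connected clients via io.engine.clientsCount, so you can broadcast that value instead (same server setup as above, just a sketch):
io.on('connection', function (socket) {
  // broadcast the live client count on every connect and disconnect
  io.emit('userCount', { userCount: io.engine.clientsCount });
  socket.on('disconnect', function () {
    io.emit('userCount', { userCount: io.engine.clientsCount });
  });
});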
I have an application that runs on Node.js and Socket.IO.
The events 2 and 3 seen in the image are empty events that the Socket.IO server keeps emitting at random intervals. What's more annoying is that the delay between these empty events is huge; as seen in the picture, there was a 19-second delay between 2 and 3. Because of this, the message:delete event emitted by the Socket.IO client, which was supposed to trigger message:deleted immediately, got delayed by 19 seconds!
What's worse, while these empty events are being emitted, all other socket events originating from the client are not emitted; they are stuck in limbo until the empty events are out of the way.
I suspect these empty events are socket ping and pong events. Does anyone have any idea why this happens, and how we could prevent it? Or maybe even set a priority for the client's socket.emit so that, regardless of these empty events, the client-emitted event takes precedence and is fired immediately?
Edit #1
Despite adding { transports: ['websocket'] }, the delay was still well over 17 seconds, as you can see in this image.
Edit #2 - The socket.emit code
The client is made up of a lot of jQuery + vanilla JS + Angular 1.7. The client socket is a factory wrapper
app.factory('socket',['$rootScope', function ($rootScope) {
var socket = io.connect('',{
path: '/socket.io',
transports: ['websocket']
});
return {
on: function (eventName, callback) {
socket.on(eventName, function () {
var args = arguments;
$rootScope.$apply(function () {
callback.apply(socket, args);
});
});
},
emit: function (eventName, data, callback) {
socket.emit(eventName, data, function () {
var args = arguments;
$rootScope.$apply(function () {
if (callback) {
callback.apply(socket, args);
}
});
})
},
disconnect: function(close){
socket.disconnect(close);
},
removeAllListeners: function (eventName, callback) {
socket.removeAllListeners(eventName, function() {
var args = arguments;
$rootScope.$apply(function () {
callback.apply(socket, args);
});
});
}
};
}])
And in the controller the socket.emit code is as follows:
$scope.deleteMessage = function (message) {
socket.emit('message:delete', {
id: message
});
};
I use Express + Node.js, so here's the code for initializing the server socket:
Server = require('socket.io'),
io = new Server({
path: `/socket.io`,
transports: ['websocket']
});
And the server side socket code listens to the client emit as follows
io.on("connection", async function (socket) {
socket.on("message:delete", function (payload) {
if (props.connectedUsers[socket.userId].info.isAdmin) {
console.log("Received Message Deletion by",
props.connectedUsers[socket.userId].info.name);
if (payload.id == "SYS_MOTD") return;
await Message.remove({
_id: new db.Types.ObjectId(payload.id)
}).exec(function (err, message) {
if (!err) {
console.log("Emit message:deleted");
io.sockets.emit("message:deleted", {
id: payload.id
});
}
});
}
});
});
I think it's normal, according to the Socket.IO documentation:
By default, a long-polling connection is established first, then
upgraded to “better” transports (like WebSocket).
What you can do is set the connection to use WebSocket only; it should stop the long-polling and you will receive the messages immediately:
socket = io.connect({transports: ['websocket']});
Update
No need to set the path to /socket.io, it's the default value.
On Client side you can also use:
const socket = io([URL], {
transports: ['websocket']
});
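As for the empty frames themselves: packets containing just 2 and 3 are the Engine.IO ping/pong heartbeat, and their cadence is controlled by options on the server. A sketch, reusing the Server constructor from the question (the values shown are illustrative; the defaults vary by Socket.IO version):
io = new Server({
  path: '/socket.io',
  transports: ['websocket'],
  pingInterval: 25000, // how often a heartbeat ping is sent (ms)
  pingTimeout: 20000   // how long to wait for the matching pong before dropping the connection (ms)
});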
I've noticed you are using async/await together with a callback. I would use either async/await or a callback in your server-side code, for example:
try {
await Message.remove({
_id: new db.Types.ObjectId(payload.id)
})
console.log("Emit message:deleted");
io.sockets.emit("message:deleted", {
id: payload.id
});
} catch(ex) {
console.log(ex.message);
}
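For completeness, a sketch of how the message:delete handler from the question could look with that change applied (the callback itself also has to be async for the await to be valid; same names as in the question):
socket.on("message:delete", async function (payload) {
  if (!props.connectedUsers[socket.userId].info.isAdmin) return;
  if (payload.id == "SYS_MOTD") return;
  try {
    await Message.remove({ _id: new db.Types.ObjectId(payload.id) });
    console.log("Emit message:deleted");
    io.sockets.emit("message:deleted", { id: payload.id });
  } catch (ex) {
    console.log(ex.message);
  }
});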
I am using mqttjs and socketio on my nodejs backend.
I am using angular as my frontend framework.
On my frontend there are 3 routes.
All require a socket connection for real-time data.
So in ngOnInit I run the client-side Socket.IO connection code, and in ngOnDestroy I disconnect the socket.
And in my server-side code (index.js) there are mainly 3 actions happening:
const io = require('socket.io')(server)
mqtt.createConnection();
mqtt.mqttSubscriptions(io);
mqtt.mqttMessages(io);
These are the mqtt methods:
const createConnection = () => {
let options = {
protocol: 'mqtt',
clientId: process.env.MQTT_CLIENT_ID,
username: process.env.MQTT_USERNAME,
password: process.env.MQTT_PASSWORD,
};
client = mqtt.connect(process.env.MQTT_HOST, options);
client.on('connect', function() {
winston.info('MQTT connected');
});
client.on('error', function(err) {
winston.error(err);
});
};
const mqttSubscriptions = io => {
winston.info(`Socket connected.`);
client.subscribe([TOPICS.DATA], function(error, granted) {
if (error) {
winston.error(error);
}
winston.info('Topics: ', granted);
});
};
const mqttMessages = io => {
io.sockets.on('connection', socket => {
winston.info(`Socket connected.`);
client.on('message', function(topic, message) {
let payload = JSON.parse(message.toString());
winston.info(topic);
winston.info(payload.id);
switch (topic) {
case TOPICS.DATA:
dataController.storeData(payload, io);
break;
default:
winston.error('Wrong topic');
break;
}
});
});
};
And in the dataController I am running
socket.emit()
My problem is that every time I navigate to a route and come back, dataController.storeData is called multiple times.
That is, when I am at route A, then navigate to route B, then back to A and then to C, the data is multiplied by the number of navigations (in this case 4 times).
I found that it is a Socket.IO and MQTT connection problem, but I don't know how to solve it, since I am new to both of these.
Any help?
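For what it's worth, one way this multiplication can happen with the code above is that client.on('message', ...) is registered inside io.sockets.on('connection', ...), so every new socket (each route navigation) adds another MQTT message listener. A minimal sketch that registers the MQTT handler only once, keeping the same names as in the question:
const mqttMessages = io => {
  // register the MQTT message handler once, outside the socket connection handler
  client.on('message', function (topic, message) {
    let payload = JSON.parse(message.toString());
    switch (topic) {
      case TOPICS.DATA:
        dataController.storeData(payload, io);
        break;
      default:
        winston.error('Wrong topic');
        break;
    }
  });

  io.sockets.on('connection', socket => {
    winston.info('Socket connected.');
  });
};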
I have a Koajs node app in a docker container on an EC2 instance. The app is behind an AWS Application Load Balancer.
The app simply takes a POSTed file and responds with a stream that the client can view events on.
So my server is doing the right thing (sending file data), and my client is doing the right thing (receiving file data and sending back progress), but the ALB is timing out. I don't understand why it's timing out. Both client and server are sending and receiving data to/from each other, so I would think that would qualify as keep alive traffic.
Here's the code that each is running.
Client:
const request = require('request-promise');
const fs = require('fs');
const filePath = './1Gfile.txt';
const file = fs.createReadStream(filePath);
(async () => {
// PUT File
request.put({
uri: `http://server/test`,
formData: { file },
headers: { Connection: 'keep-alive' },
timeout: 200000,
})
.on('data', (data) => {
const progressString = data.toString();
console.log({ progressString });
});
})();
Server:
const { Readable } = require('stream');
const Koa = require('koa');
const router = require('koa-router')();
(async () => {
const app = module.exports = new Koa();
router.get('/healthcheck', async (ctx) => {
ctx.status = 200;
});
router.put('/test', test);
async function test(ctx) {
const read = new Readable({
objectMode: true,
read() { },
});
ctx.body = read;
let i = 1;
setInterval(() => {
read.push(`${process.hrtime()}, ${i}`);
ctx.res.write('a');
i++;
}, 3000);
}
app.use(router.routes());
app.use(router.allowedMethods());
app.listen(3000, (err) => {
if (err) throw err;
console.info(`App started on port 3000 with environment localhost`);
});
})();
Both server and client are logging the correct things, but the ALB just times out at whatever I set its idle timeout to. Is there some trick to tell the ALB that traffic is really flowing?
Thanks so much for any light you can shed on it.
Just a quick guess: you need to enable keepAlive when using request-promise. Add forever: true to the options. Try this:
request.put({
uri: `http://server/test`,
formData: { file },
headers: { Connection: 'keep-alive' },
timeout: 200000,
forever: true,
})
We had a similar timeout issue when using request-promise-native and fixed it by adding this option. Hopefully it works out for you.
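For context, forever: true in the request library essentially switches to a keep-alive agent, so an equivalent sketch is to pass an explicit agent instead:
const http = require('http');

request.put({
  uri: `http://server/test`,
  formData: { file },
  headers: { Connection: 'keep-alive' },
  agent: new http.Agent({ keepAlive: true }), // roughly what forever: true switches on
  timeout: 200000,
})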
I'm trying to consume the basic Speech-to-Text service over a WebSocket using the ws package. But after successfully opening the connection and sending the initial message, I never get the listening state.
I also tried sending the audio followed by an empty binary message (to indicate that the upload is done), but the server always closes the connection with code 1000.
Following is my code:
'use strict';
var fs = require('fs');
var request = require('request');
var WS = require('ws');
var wsURI = 'wss://stream.watsonplatform.net/speech-to-text/api/v1/recognize?watson-token=[TOKEN]&model=en-UK_NarrowbandModell&x-watson-learning-opt-out=1';
var getTokenForm = {
method: 'GET',
uri: 'https://[USER_ID]:[PASSWORD]#stream.watsonplatform.net/authorization/api/v1/token?url=https://stream.watsonplatform.net/speech-to-text/api',
};
var filepath = 'C:/Temp/test1.wav';
request(getTokenForm, function(error, response, body) {
wsURI = wsURI.replace('[TOKEN]', body);
var message = {
'action': 'start',
'content-type': 'audio/wav',
'continuous': true,
'inactivity_timeout': -1
};
var ws = new WS(wsURI);
['message', 'error', 'close', 'open', 'connection'].forEach(function(eventName) {
ws.on(eventName, console.log.bind(console, eventName + ' event: '));
});
ws.on('open', function(evt) {
ws.send(JSON.stringify(message));
setTimeout(function timeout() {
var readStream = fs.createReadStream(filepath);
readStream.on('data', function(data) {
ws.send(data, {
binary: true,
mask: false,
});
});
readStream.on('end', function() {
ws.send(new Buffer(0), {
binary: true,
mask: false,
});
});
}, 1000);
});
ws.on('close', function(data) {
console.log(data)
});
});
I also tried sending the file directly (without the stream):
var sound = fs.readFileSync(filepath);
ws.send(sound, { binary: true, mask: false});
And tried adding a custom Authorization header:
var authorization = 'Basic ' + new Buffer('USER_ID:PASSWORD').toString('base64');
var ws = new WS(wsURI, {
headers: {
'Authorization': authorization,
}
});
But no luck so far.
There are a couple of things here. The main issue is that the model name in the query string has a typo: there should be only one 'l' at the end. (Although not responding with an error message is a bug in the service that I'm going to report to the team.)
So, fix that and you get an error that frames should be masked. That's an easy fix: just switch mask: false to true in both places.
Then, once you've finished sending your audio and the ending message, the service will send your final results and then another {"state": "listening"} message. This second state: listening should be your trigger to close the connection. Otherwise it will eventually time out and close automatically (inactivity_timeout applies when you're sending audio with no speech in it, not when you aren't sending any data at all).
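To illustrate that last point, here is a sketch of a message handler that closes the socket once the second {"state": "listening"} message arrives (listeningCount is just a name used for this sketch):
var listeningCount = 0;
ws.on('message', function(data) {
  var msg = JSON.parse(data);
  if (msg.state === 'listening') {
    listeningCount++;
    // first "listening" = ready for audio; second = final results have already been sent
    if (listeningCount === 2) ws.close();
  } else if (msg.results) {
    console.log('results: ', JSON.stringify(msg.results));
  }
});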