I have a script that sends MediaRecorder data over WebSockets to the backend while the MediaStream is running. The backend then saves this data to a file so it can be replayed.
I am running into an issue where the resulting video is extremely glitchy (sometimes it comes out somewhat smooth, but mostly it is glitchy) and I do not know why.
I have tested this on Chrome (latest version) and Microsoft Edge, and both produce the same glitchy results. Firefox seems to be smoother, but I have only done limited testing there.
My front end code:
const socket = io('/')

document.querySelector("#startReccy").addEventListener('click', async () => {
    socket.emit('recID', ROOM_ID)

    let screenStream = await navigator.mediaDevices.getDisplayMedia({
        video: {
            cursor: "never"
        },
        audio: true
    })

    let vidId = document.querySelector("#myLiveVid")
    vidId.srcObject = screenStream;

    let videoMediaRecObj = new MediaRecorder(screenStream, {
        audioBitsPerSecond: 128000,
        videoBitsPerSecond: 2500000
    });

    //Send to server
    videoMediaRecObj.addEventListener('dataavailable', (e) => {
        //console.log(e.data)
        //socket.emit('writeInfoData', ROOM_ID, e.data)
        e.data.arrayBuffer().then(buf => {
            socket.emit('writeInfoData', ROOM_ID, buf)
        })
    })

    videoMediaRecObj.start(3000)

    //When we stop
    screenStream.addEventListener('ended', () => {
        videoMediaRecObj.stop();
        //screenStream.getTracks().forEach(track => track.stop());
    })
})
The backend
const express = require('express')
const app = express()
const server = require('http').Server(app)
const io = require('socket.io')(server)
const fs = require('fs')
const path = require('path') // needed for path.join below
const { v4: uuidV4 } = require('uuid')

app.set('view engine', 'ejs')
app.use(express.static("public"))

app.get("/", (req, res) => {
    res.redirect(`${uuidV4()}`)
})

app.get('/:room', (req, res) => {
    res.render('room', { roomId: req.params.room })
})

io.on('connection', socket => {
    socket.on('screen-rec', (roomId) => {
        /* Seems to get room ID with no problem */
        console.log(`${roomId}`)
        /* We join a room */
        socket.join(roomId)
        socket.roomId = path.join(__dirname, 'media', roomId + '.webm')
        socket.emit('startRecording')
    })

    /* Write media file to disk */
    socket.on('writeInfoData', (roomId, data) => {
        /* trying to read the data */
        //console.log(data) - This works
        fs.appendFile(`file-${roomId}.webm`, data, function (err) {
            if (err) {
                throw err;
            }
            console.log(`${Date().toLocaleString()} - ${roomId} - Saved!`);
        });
    })
})

server.listen(3000, () => {
    console.log("Listening")
})
I am thinking the issue may be that the chunks are received out of order, which I guess I could solve by making sequential POST requests instead. But WebSockets run over TCP, so the messages should arrive in order, and since I am only sending a chunk every 3 seconds the server should have time to save each one to disk before the next arrives (not sure though, just my guess).
Is there something wrong with my implementation?
Edit: Solved by O. Jones. I thought a bigger timeslice would be better, but it turns out I need a really small one. Thank you so much!
Each time MediaRecorder calls your ondataavailable handler it gives you a Blob in event.data. That Blob contains all the encoded data accumulated since the previous call. At a combined data rate of 2.6 megabits per second, three seconds of that is almost a megabyte. That's a very large payload to ship in a single message and to hand to appendFile.
Call .start() with a much smaller timeslice: use .start(20) instead of .start(3000), so ondataavailable fires more often. You're still moving the same amount of data, but moving it in smaller chunks makes it less likely that things will get out of order or that you'll lose some.
And consider using Node.js streams to write your files.
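A minimal sketch of what the stream-based server side could look like (an illustration only, not part of the original answer; it reuses the 'writeInfoData' event from the question and keeps one fs.WriteStream per room instead of calling fs.appendFile for every chunk):
const fs = require('fs')
const path = require('path')

const roomStreams = new Map() // roomId -> fs.WriteStream

io.on('connection', socket => {
    socket.on('writeInfoData', (roomId, data) => {
        // Open the file once per room and keep appending to the same stream
        let out = roomStreams.get(roomId)
        if (!out) {
            out = fs.createWriteStream(path.join(__dirname, 'media', `file-${roomId}.webm`), { flags: 'a' })
            roomStreams.set(roomId, out)
        }
        // Buffer.from accepts the ArrayBuffer sent from the browser
        out.write(Buffer.from(data))
    })
})
In a real implementation the stream for a room would also need to be closed (out.end()) once that recording stops.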
Related
I'm using Node with the mqtt-connection and aedes MQTT libraries. I am using aedes to run an MQTT server, and I wish to connect to that server from within Node using a stream rather than opening a TCP socket. Both libraries will accept a Duplex stream.
Why does something like this work:
const mqttCon = require('mqtt-connection');

const duplex = require('net').createConnection(1883);
const client = mqttCon(duplex, {
    protocolVersion: 3
});
While something like this fails (the stream closes) after the first data exchange?
const stream = require('stream');
const mqttCon = require('mqtt-connection');
const aedes = require('aedes')();

const duplex = new stream.Transform();
duplex._transform = (chunk, encoding, callback) => {
    duplex.push(chunk);
    console.log(chunk);
    callback();
};

const client = mqttCon(duplex, {
    protocolVersion: 3
});
aedes.handle(duplex);
I feel like I must have some fundamental misconception about how streams are supposed to work. Basically I want to create something that acts like a TCP socket, allowing these two "processes" to communicate internal to node.
Typical use of aedes to create an MQTT server would look like this:
const aedes = require('aedes')({
    concurrency: 500,
    maxClientsIdLength: 100
});
const server = require('net').createServer(aedes.handle);
Edit: more details about the failure.
The client feeds data to the stream, and as soon as aedes finishes responding, duplex.on('close') fires. There are no error messages and no indication from either side that anything went wrong; the stream just closes, so each side then shuts down gracefully. I'm guessing that one side or the other sees an "end" to the stream and closes it.
The problem is that "duplex" is a single stream, where what is really needed is two streams bridged together. Here is what works:
const stream = require('stream');

// Two Duplex streams cross-wired: whatever one side writes
// is pushed to the other side's readable end.
const duplexAedes = new stream.Duplex({
    write: function (chunk, encoding, next) {
        setImmediate(function () {
            duplexClient.push(chunk);
        });
        next();
    },
    read: function (size) {
        // Placeholder
    }
});

const duplexClient = new stream.Duplex({
    write: function (chunk, encoding, next) {
        setImmediate(function () {
            duplexAedes.push(chunk);
        });
        next();
    },
    read: function (size) {
        // Placeholder
    }
});

duplexAedes.authentication = clientId;
const client = mqttCon(duplexClient, {
    protocolVersion: 3
});
aedes_handle(duplexAedes);
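The same bridging idea can be packaged as a small helper. This is a hedged sketch (the makeDuplexPair name is made up here, not from the original post), just to show the cross-wiring on its own:
const stream = require('stream');

function makeDuplexPair() {
    // Each side's write() pushes into the other side's readable buffer,
    // so the pair behaves like the two ends of an in-process socket.
    const sideA = new stream.Duplex({
        write(chunk, encoding, next) { sideB.push(chunk); next(); },
        read() {}
    });
    const sideB = new stream.Duplex({
        write(chunk, encoding, next) { sideA.push(chunk); next(); },
        read() {}
    });
    return { sideA, sideB };
}

// Quick check: data written on one side comes out of the other
const { sideA, sideB } = makeDuplexPair();
sideB.on('data', chunk => console.log('sideB received:', chunk.toString()));
sideA.write('hello');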
I'm making a website that tracks kill statistics in a game, but I can only access 50 key-value pairs per request, so to make sure my data is accurate I want to make a request about every 30 seconds.
I feel like I may have gone wrong at some stage in the implementation. Maybe there is a way to make requests that doesn't involve the
express.get('/route/', (req, res) => { //code })
syntax and I just don't know about it. In short, I want the database to be updated every 30 seconds without having to refresh the browser. I've tried wrapping my GET request in a function and putting it in setInterval, but it still doesn't run.
const express = require('express');
const zkillRouter = express.Router();
const axios = require('axios');
const zkillDbInit = require('../utils/zkillTableInit');
const sqlite3 = require('sqlite3').verbose();

zkillDbInit();
let db = new sqlite3.Database('zkill.db');

setInterval(() => {
    zkillRouter.get('/', (req, res) => {
        axios.get('https://zkillboard.com/api/kills/w-space/', {headers: {
            'accept-encoding': 'gzip',
            'user-agent': 'me',
            'connection': 'close'
        }})
        .then(response => {
            // handle success
            const wormholeData = (response.data)
            res.status(200);
            //probably not the best way to do this but it's fine for now.
            for (let i = 0; i < Object.keys(wormholeData).length; i++) {
                const currentZKillId = Object.keys(wormholeData)[i]
                const currentHash = Object.values(wormholeData)[i]
                let values = {
                    $zkill_id: currentZKillId,
                    $hash: currentHash
                };
                //puts it in the database
                db.run(`INSERT INTO zkill (zkill_id, hash) VALUES ($zkill_id, $hash)`,
                    values,
                    function(err) {
                        if (err) {
                            throw(err)
                        }
                    })
            }
        })
    })
}, 1000)

module.exports = zkillRouter;
One thing I have considered is that I don't necessarily need this functionality to be part of the same program. All I need is the database, so if I have to I could run this code separately as a little standalone Node program that makes requests to the API and updates the database. I don't think that would be ideal, but if it's the only way to do what I want then I'll consider it.
Clarification: the .get() method is called on zkillRouter, which is an instance of express.Router() declared on line two. This in turn links back to my app.js file through an apiRouter, so the full route is localhost:5000/api/zkill/. That was a big part of the problem: I didn't know you could call axios.get() without it being tied to a route, so I was stuck on this for a while.
I fixed it myself (edit 4):
I was using setInterval wrong.
I just got rid of the router statement, as it wasn't needed.
I definitely need to tweak the interval so that I don't get so many SQL errors for violating the unique constraint; every 5 minutes should be enough, I think.
Don't throw the error, just log it.
// axios and the sqlite3 `db` handle are set up the same way as in the question
function myfunction() {
    axios.get('https://zkillboard.com/api/kills/w-space/', {headers: {
        'accept-encoding': 'gzip',
        'user-agent': 'me lol',
        'connection': 'close'
    }})
    .then(response => {
        // handle success
        const wormholeData = (response.data)
        for (let i = 0; i < Object.keys(wormholeData).length; i++) {
            const currentZKillId = Object.keys(wormholeData)[i]
            const currentHash = Object.values(wormholeData)[i]
            let values = {
                $zkill_id: currentZKillId,
                $hash: currentHash
            };
            db.run(`INSERT INTO zkill (zkill_id, hash) VALUES ($zkill_id, $hash)`,
                values,
                function(err) {
                    if (err) {
                        return console.log(i);
                    }
                })
        }
    })
}

setInterval(myfunction, 1000 * 30)
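As a side note on the unique-constraint errors mentioned above: assuming zkill_id carries the UNIQUE constraint, one way to avoid logging them at all is SQLite's INSERT OR IGNORE, which silently skips duplicate rows (a sketch, not part of the original fix):
// Same `db` handle and `values` object as above; duplicates are skipped instead of erroring.
db.run(
    `INSERT OR IGNORE INTO zkill (zkill_id, hash) VALUES ($zkill_id, $hash)`,
    values,
    function (err) {
        if (err) console.log(err.message); // only genuine errors remain
    }
);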
I am trying to check whether an image exists in a folder.
If it exists, I want to pipe its stream to res (I'm using Express).
If it does not exist, I want to do something else.
I created an async function that is supposed to return either the image's stream if it exists or false if it doesn't.
I do get a stream, but the browser loads forever, as if there were an issue with the stream.
Here is the smallest reproduction I could come up with:
Link to runnable code
const express = require('express');
const path = require('path');
const fs = require('fs');

const app = express();

app.get('/', async (req, res) => {
    // Check if the image is already converted by returning a stream or false
    const ext = 'jpg';
    const imageConvertedStream = await imageAlreadyConverted(
        './foo',
        1,
        '100x100',
        80,
        ext
    );

    // Image already converted, we send it back
    if (imageConvertedStream) {
        console.log('image exists');
        res.type(`image/${ext}`);
        imageConvertedStream.pipe(res);
        return;
    } else {
        console.log('Image not found');
    }
});

app.listen(3000, () => {
    console.log('Server started on port 3000');
});
async function imageAlreadyConverted(
    basePath,
    id,
    size,
    quality,
    extWanted
) {
    return new Promise(resolve => {
        // If we know the wanted extension, we check if it exists
        let imagePath;
        if (extWanted) {
            imagePath = path.join(
                basePath,
                size,
                `img_${id}_${quality}.${extWanted}`
            );
        } else {
            imagePath = path.join(basePath, size, `img_${id}_${quality}.jpg`);
        }

        console.log(imagePath);

        const readStream = fs.createReadStream(imagePath);
        readStream.on('error', () => {
            console.log('error');
            resolve(false);
        });
        readStream.on('readable', () => {
            console.log('readable');
            resolve(readStream);
        });
    });
}
95% of my images will be available and I need performance. I suppose checking with fs.stat first and then creating the stream takes longer than simply trying to create the stream and handling the error.
The issue was with the "readable" event: while a 'readable' listener is attached the stream stays paused, so the later pipe(res) never delivers any data. Once I switched to the "open" event, which only signals that the file descriptor was opened, everything works fine.
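A minimal sketch of the changed part of imageAlreadyConverted (only the event name differs from the code above):
const readStream = fs.createReadStream(imagePath);

// 'error' fires if the file is missing or unreadable
readStream.on('error', () => resolve(false));

// 'open' fires once the file descriptor is opened: the file exists,
// and no 'readable' listener is left attached to block the later pipe(res)
readStream.on('open', () => resolve(readStream));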
I have a Node app that collects vote submissions and stores them in Cassandra. The votes are stored as base64-encoded encrypted strings. The API has an /export endpoint that should fetch all of these vote strings (possibly more than a million), convert them to binary, and append them one after another to a votes.egd file. That file should then be zipped and sent to the client. My idea is to stream the rows from Cassandra, converting each vote string to binary and writing it to a WriteStream.
I want to wrap this functionality in a Promise for easy use. I have the following:
streamVotesToFile(query, validVotesFileBasename) {
    return new Promise((resolve, reject) => {
        const writeStream = fs.createWriteStream(`${validVotesFileBasename}.egd`);

        writeStream.on('error', (err) => {
            logger.error(`Writestream ${validVotesFileBasename}.egd error`);
            reject(err);
        });

        writeStream.on('drain', () => {
            logger.info(`Writestream ${validVotesFileBasename}.egd drained`);
        })

        db.client.stream(query)
            .on('readable', function() {
                let row = this.read();
                while (row) {
                    const envelope = new Buffer(row.vote, 'base64');
                    if (!writeStream.write(envelope + '\n')) {
                        logger.error(`Couldn't write vote`);
                    }
                    row = this.read()
                }
            })
            .on('end', () => { // No more rows from Cassandra
                writeStream.end();
                writeStream.on('finish', () => {
                    logger.info(`Stream done writing`);
                    resolve();
                });
            })
            .on('error', (err) => { // err is a response error from Cassandra
                reject(err);
            });
    });
}
When I run this it appends all the votes to a file and the download works fine. But I have a few problems/questions:
1. If I make a request to the /export endpoint and this function runs, all other requests to the app become extremely slow or just don't finish until the export request is done. I'm guessing the event loop is being hogged by all of these events from the Cassandra stream (thousands per second)?
2. All the votes seem to be written to the file fine, yet writeStream.write() returns false for almost every call and I see the corresponding logged message (see code)?
3. I understand that I need to consider backpressure and the 'drain' event for the WritableStream, so ideally I would use pipe() to pipe the votes to the file, because that has built-in backpressure support (right?). But since I need to process each row (convert it to binary and possibly add data from other row fields in the future), how would I do that with pipe()?
This is the perfect use case for a Transform stream:
const { Transform } = require('stream');

const myTransform = new Transform({
    writableObjectMode: true, // rows arrive as objects from the Cassandra stream
    transform(row, encoding, callback) {
        // Transform the row into something else
        const item = Buffer.from(row['vote'], 'base64');
        callback(null, item);
    }
});

client.stream(query, params, { prepare: true })
    .pipe(myTransform)
    .pipe(fileStream); // fileStream is the fs.createWriteStream from the question
See more information on how to implement a TransformStream in the Node.js API Docs.
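For completeness, here is a hedged sketch of how that could slot into the question's streamVotesToFile (names reused from the question; the use of stream.pipeline for combined error handling is an assumption on my part, not part of the original answer):
const { Transform, pipeline } = require('stream');
const fs = require('fs');

function streamVotesToFile(query, validVotesFileBasename) {
    return new Promise((resolve, reject) => {
        const toBinary = new Transform({
            writableObjectMode: true, // Cassandra rows come in as objects
            transform(row, encoding, callback) {
                callback(null, Buffer.from(row.vote, 'base64'));
            }
        });

        // pipeline() wires up backpressure and forwards errors from any stage
        pipeline(
            db.client.stream(query),
            toBinary,
            fs.createWriteStream(`${validVotesFileBasename}.egd`),
            (err) => (err ? reject(err) : resolve())
        );
    });
}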
I have an Android game with 40,000 users online, and each user sends a request to the server every 5 seconds.
I wrote this code to test requests:
const express = require('express')
const app = express()
const pg = require('pg')

const conString = 'postgres://postgres:123456@localhost/dbtest'

app.get('/', function (req, res, next) {
    pg.connect(conString, function (err, client, done) {
        if (err) {
            return next(err)
        }
        client.query('SELECT name, age FROM users limit 1;', [], function (err, result) {
            done()
            if (err) {
                return next(err)
            }
            res.json(result.rows)
        })
    })
})

app.listen(3000)
Demo
And to test this code with 40,000 requests, I wrote this AJAX code:
for (var i = 0; i < 40000; i++) {
    var j = 1;
    $.ajax({
        url: "http://85.185.161.139:3001/",
        success: function(response) {
            var d = new Date();
            console.log(j++, d.getHours() + ":" + d.getMinutes() + ":" + d.getSeconds());
        }
    });
}
Server details (I know this machine is weak).
Questions:
1. This code (Node.js) only manages about 200 responses per second.
2. How can I improve my code to increase the number of responses per second?
3. Is this AJAX loop a correct way to simulate 40,000 online users?
4. Would using sockets be better or not?
You should take a divide-and-conquer approach to problems like this: find the most resource-inefficient operation and try to replace it or reduce the number of calls to it.
The main problem I see here is that the server opens a new database connection on every request, which probably takes most of the time and resources.
I suggest opening the connection when the server boots up and reusing it across requests.
const express = require('express')
const app = express()
const pg = require('pg')

const conString = 'postgres://postgres:123456@localhost/dbtest'

let pgClient // assigned once the connection is established

pg.connect(conString, function (err, client, done) {
    if (err) {
        throw err
    }
    pgClient = client
})

app.get('/', function (req, res, next) {
    pgClient.query('SELECT name, age FROM users limit 1;', [], function (err, result) {
        if (err) {
            return next(err)
        }
        res.json(result.rows)
    })
})

app.listen(3000)
For proper stress/load testing it is better to use a specialized tool such as ab from Apache. Finally, sockets are better for rapid, small data transfers, but keep in mind they have scaling problems and in most cases become very inefficient at 10K+ simultaneous connections.
EDIT: As @robertklep pointed out, it is better to use client pooling in this case and retrieve clients from the pool.
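A hedged sketch of what the pooled variant could look like, using pg's Pool API (the connection string is the same placeholder as above, and the max value is just an illustrative setting):
const express = require('express')
const { Pool } = require('pg')

const app = express()
const pool = new Pool({
    connectionString: 'postgres://postgres:123456@localhost/dbtest',
    max: 20 // upper bound on concurrent connections; tune for your hardware
})

app.get('/', function (req, res, next) {
    // pool.query checks a client out, runs the query, and returns it to the pool
    pool.query('SELECT name, age FROM users LIMIT 1;', [], function (err, result) {
        if (err) {
            return next(err)
        }
        res.json(result.rows)
    })
})

app.listen(3000)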