I'm new to WebRTC and I'm using the simple-peer package. I'm working on a Discord-like clone where someone opens a room and other people can join it. Sometimes the connections between peers work perfectly fine; sometimes they work only from one side, or not at all.
This is the peer config I'm using:
export const prepareNewPeerConnection = (connUserSocketId, isInitiator) => {
  const localStream = store.getState().room.localStream;

  if (isInitiator) {
    console.log("preparing as initiator");
  } else {
    console.log("preparing not as an initiator");
  }

  peers[connUserSocketId] = new Peer({
    initiator: isInitiator,
    config: {
      iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
    },
    stream: localStream,
  });

  peers[connUserSocketId].on("signal", (data) => {
    const signalData = {
      signal: data,
      connUserSocketId,
    };
    signalPeerData(signalData);
  });

  peers[connUserSocketId].on("stream", (remoteStream) => {
    remoteStream.connUserSocketId = connUserSocketId;
    addNewRemoteStream(remoteStream);
  });
};
I have seen people talking about TURN and STUN servers, which I don't really understand. Could they be the problem here?
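For context: STUN only helps peers discover their public addresses, while TURN relays traffic when no direct peer-to-peer path can be established (e.g. behind symmetric NAT), which matches the "works for some pairs, not others" symptom. A sketch of what adding a TURN fallback to the ICE config could look like; the TURN URL and credentials below are placeholders, not a real service:

```javascript
// Sketch: ICE config with a hypothetical TURN fallback for simple-peer.
// turn.example.com, username and credential are placeholders.
const iceConfig = {
  iceServers: [
    { urls: "stun:stun.l.google.com:19302" }, // address discovery only
    {
      urls: "turn:turn.example.com:3478",     // relay of last resort
      username: "user",
      credential: "pass",
    },
  ],
};

// Would be passed to simple-peer as:
// new Peer({ initiator: isInitiator, config: iceConfig, stream: localStream })
```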
I'm building a web-based remote control application for the music program Ableton Live. The idea is to be able to use a tablet on the same local network as a custom controller.
Ableton Live runs Python scripts, and I use this library that exposes the Ableton Python API to Node. In Node, I'm building an HTTP/Websocket server to serve my React frontend and to handle communication between the Ableton Python API and the frontend running Redux/RTK Query.
Since I both want to send commands from the frontend to Ableton Live, and be able to change something in Ableton Live on my laptop and have the frontend reflect it, I need to keep a bi-directional Websocket communication going. The frontend recreates parts of the Ableton Live UI, so different components will care about/subscribe to different small parts of the whole Ableton Live "state", and will need to be able to update just those parts.
I tried to follow the official RTK Query documentation, but there are a few things I really don't know how best to solve.
RTK Query code:
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';
import { LiveProject } from '../models/liveModels';

export const remoteScriptsApi = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: 'http://localhost:9001' }),
  endpoints: (builder) => ({
    getLiveState: builder.query<LiveProject, void>({
      query: () => '/completeLiveState',
      async onCacheEntryAdded(arg, { updateCachedData, cacheDataLoaded, cacheEntryRemoved }) {
        const ws = new WebSocket('ws://localhost:9001/ws');
        try {
          await cacheDataLoaded;
          const listener = (event: MessageEvent) => {
            const message = JSON.parse(event.data);
            switch (message.type) {
              case 'trackName':
                updateCachedData(draft => {
                  const track = draft.tracks.find(t => t.trackIndex === message.id);
                  if (track) {
                    track.trackName = message.value;
                    // Components then use selectFromResult to only
                    // rerender on exactly their data being updated
                  }
                });
                break;
              default:
                break;
            }
          };
          ws.addEventListener('message', listener);
        } catch (error) { }
        await cacheEntryRemoved;
        ws.close();
      },
    }),
  }),
});
Server code:
import { Ableton } from 'ableton-js';
import { Track } from 'ableton-js/ns/track';
import path from 'path';
import { serveDir } from 'uwebsocket-serve';
import { App, WebSocket } from 'uWebSockets.js';

const ableton = new Ableton();
const decoder = new TextDecoder();
const initialTracks: Track[] = [];

async function buildTrackList(trackArray: Track[]) {
  const tracks = await Promise.all(trackArray.map(async (track) => {
    initialTracks.push(track);
    // A lot more async Ableton data fetching will be going on here
    return {
      trackIndex: track.raw.id,
      trackName: track.raw.name,
    };
  }));
  return tracks;
}

const app = App()
  .get('/completeLiveState', async (res, req) => {
    res.onAborted(() => console.log('TODO: Handle onAborted error.'));
    const trackArray = await ableton.song.get('tracks');
    const tracks = await buildTrackList(trackArray);
    const liveProject = {
      tracks // Will send a lot more than tracks eventually
    };
    res.writeHeader('Content-Type', 'application/json').end(JSON.stringify(liveProject));
  })
  .ws('/ws', {
    open: (ws) => {
      initialTracks.forEach(track => {
        track.addListener('name', (result) => {
          ws.send(JSON.stringify({
            type: 'trackName',
            id: track.raw.id,
            value: result
          }));
        });
      });
    },
    message: async (ws, msg) => {
      const payload = JSON.parse(decoder.decode(msg));
      if (payload.type === 'trackName') {
        // Update track name in Ableton Live and respond
      }
    }
  })
  .get('/*', serveDir(path.resolve(__dirname, '../myCoolProject/build')))
  .listen(9001, (listenSocket) => {
    if (listenSocket) {
      console.log('Listening to port 9001');
    }
  });
I have a timing issue: the server's ".ws open" handler runs before the buildTrackList function has finished fetching all the tracks from Ableton Live. The "listeners" I'm adding in the open handler are callbacks you can attach to things in Ableton Live; the one in this example fires whenever the name of a track changes. The first question is whether it's best to solve this timing issue on the server side or on the RTK Query side?
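One server-side option, sketched below with illustrative names (tracksReady, onOpen are not from the real app): gate the open handler on a promise that resolves once buildTrackList has finished, so listeners are only attached after the tracks exist:

```javascript
// Sketch: defer listener attachment until track loading completes.
// `tracksReady`, `onOpen` and the data shapes are illustrative only.
let resolveTracksReady;
const tracksReady = new Promise((resolve) => { resolveTracksReady = resolve; });

async function buildTrackList(trackArray) {
  const tracks = trackArray.map((track) => ({
    trackIndex: track.id,
    trackName: track.name,
  }));
  resolveTracksReady(tracks); // signal: safe to attach listeners now
  return tracks;
}

// Stand-in for the uWS `open` handler: await the tracks before wiring listeners.
async function onOpen(attachListener) {
  const tracks = await tracksReady;
  tracks.forEach((t) => attachListener(t));
  return tracks.length;
}
```

Even if `open` fires first, it simply waits until the track list is ready instead of iterating over an empty array.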
All the examples I've seen of working with WebSockets in RTK Query are about "streaming updates". But from the beginning I've thought of my scenario as needing bi-directional communication over the same WebSocket connection throughout the whole application. Is this possible with RTK Query, and if so, how do I implement it? Or should I use regular query endpoints for all commands from the frontend to the server?
I am trying to use an Apollo/GraphQL subscription in my Next.js project. My GraphQL server lives in an external Next.js service. Queries and mutations work without any problem, but when I use useSubscription I get the following error:
"Error: Observable cancelled prematurely
at Concast.removeObserver (webpack-internal:///../../node_modules/@apollo/client/utilities/observables/Concast.js:118:33)
at eval (webpack-internal:///../../node_modules/@apollo/client/utilities/observables/Concast.js:21:47)
at cleanupSubscription (webpack-internal:///../../node_modules/zen-observable-ts/module.js:92:7)
at Subscription.unsubscribe (webpack-internal:///../../node_modules/zen-observable-ts/module.js:207:7)
at cleanupSubscription (webpack-internal:///../../node_modules/zen-observable-ts/module.js:97:21)
at Subscription.unsubscribe (webpack-internal:///../../node_modules/zen-observable-ts/module.js:207:7)
at eval (webpack-internal:///../../node_modules/@apollo/client/react/hooks/useSubscription.js:106:26)
at safelyCallDestroy (webpack-internal:///../../node_modules/react-dom/cjs/react-dom.development.js:22763:5)
at commitHookEffectListUnmount (webpack-internal:///../../node_modules/react-dom/cjs/react-dom.development.js:22927:11)
at invokePassiveEffectUnmountInDEV (webpack-internal:///../../node_modules/react-dom/cjs/react-dom.development.js:24998:13)
at invokeEffectsInDev (webpack-internal:///../../node_modules/react-dom/cjs/react-dom.development.js:27137:11)
at commitDoubleInvokeEffectsInDEV (webpack-internal:///../../node_modules/react-dom/cjs/react-dom.development.js:27110:7)
at flushPassiveEffectsImpl (webpack-internal:///../../node_modules/react-dom/cjs/react-dom.development.js:26860:5)
at flushPassiveEffects (webpack-internal:///../../node_modules/react-dom/cjs/react-dom.development.js:26796:14)
at eval (webpack-internal:///../../node_modules/react-dom/cjs/react-dom.development.js:26592:9)
at workLoop (webpack-internal:///../../node_modules/scheduler/cjs/scheduler.development.js:266:34)
at flushWork (webpack-internal:///../../node_modules/scheduler/cjs/scheduler.development.js:239:14)
at MessagePort.performWorkUntilDeadline (webpack-internal:///../../node_modules/scheduler/cjs/scheduler.development.js:533:21)"
I know the subscription server is working correctly because I can listen to it from Apollo Studio, and I have created an SPA with create-react-app where it works fine.
I have used:
Server:
"apollo-server-express": "^3.6.7"
"graphql-ws": "^5.7.0"
Client:
"next": "^12.1.5"
"#apollo/client": "^3.5.10"
"graphql-ws": "^5.7.0"
Hook implementation:
const room = useSubscription(
  gql`
    subscription onRoomAdded($roomAddedId: ID!) {
      roomAdded(id: $roomAddedId) {
        id
        name
      }
    }
  `
);
Client implementation:
import { ApolloClient, HttpLink, InMemoryCache, split } from '@apollo/client';
import { GraphQLWsLink } from '@apollo/client/link/subscriptions';
import { getMainDefinition } from '@apollo/client/utilities';
import { createClient } from 'graphql-ws';
import fetch from 'isomorphic-fetch';

const HOST = 'http://localhost:3001/graphql';
const HOST_WS = 'ws://localhost:3001/graphql';
const isServer = typeof window === 'undefined';

if (isServer) {
  global.fetch = fetch;
}

const httpLink = new HttpLink({
  uri: HOST,
});

const link = isServer
  ? httpLink
  : split(
      ({ query }) => {
        const definition = getMainDefinition(query);
        return (
          definition.kind === 'OperationDefinition' &&
          definition.operation === 'subscription'
        );
      },
      new GraphQLWsLink(
        createClient({
          url: HOST_WS,
        })
      ),
      httpLink
    );

const client = new ApolloClient({
  ssrMode: isServer,
  link,
  cache: new InMemoryCache(),
});

export default client;
Any idea what the problem is? I think it could be that Next.js only works with subscriptions-transport-ws, but the official Apollo documentation indicates that the new official way is graphql-ws; the other library is already unmaintained.
UPDATE!
I have checked that the subscriptions work correctly in a production build; I'm investigating how to make them work during development. Any suggestions are welcome.
If it is working in production but not in dev, you may have the same issue I had with my React SPA: StrictMode and double rendering, as described in this GitHub issue.
So far I have found 2 ways to make it work:
remove StrictMode
subscribe with vanilla JS instead of useSubscription
const ON_USER_ADDED = gql`
  subscription OnUserAdded {
    userAdded {
      name
      id
    }
  }
`;

const subscribe = () => {
  client.subscribe({
    query: ON_USER_ADDED,
  }).subscribe({
    next(data) {
      console.log('data', data);
    },
    complete() {
      console.log('complete');
    },
    error(err) {
      console.log('error', err);
    }
  });
};
I'm trying to listen to Transfer events, but it only works for a couple of minutes and then the process terminates. I believe that's because of the blockchain node I use, but I'm not sure; I can't find anything else on it.
How can I keep the connection open and listen to Transfer events 24/7?
const web3 = new Web3(new Web3.providers.WebsocketProvider('wss://bsc-ws-node.nariox.org:443'))

const contract = await new web3.eth.Contract(
  ABI,
  contracts[0]
)

contract.events
  .Transfer({
    fromBlock: 'latest',
    filter: { from: contracts[1] }
  })
  .on('data', async (event: EventData) => {
    const {
      transactionHash,
      returnValues: { value }
    } = event
    // ....
  })
I'm attempting to complete this tutorial and I'm getting this error in the browser console.
index.js:16 Uncaught TypeError: this.io.on is not a function
at index.js:16
This is the full contents of my index.js.
navigator.getUserMedia(
  { video: true, audio: true },
  stream => {
    const localVideo = document.getElementById("local-video");
    if (localVideo) {
      localVideo.srcObject = stream;
    }
  },
  error => {
    console.warn(error.message);
  }
);

this.io.on("connection", socket => {
  const existingSocket = this.activeSockets.find(
    existingSocket => existingSocket === socket.id
  );
  if (!existingSocket) {
    this.activeSockets.push(socket.id);
    socket.emit("update-user-list", {
      users: this.activeSockets.filter(
        existingSocket => existingSocket !== socket.id
      )
    });
    socket.broadcast.emit("update-user-list", {
      users: [socket.id]
    });
  }
});
What am I missing? I know that 'this' should refer to some enclosing object, but which one?
Check out the tutorial's file list.
The part you are referring to is in the socket-connection.ts file, not index.js as the author says.
Or it may be some TypeScript abracadabra :)
P.S. You should also remember that "this" only gets its own binding inside a regular function (but not in an arrow function).
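To illustrate that last point with a standalone sketch (not code from the tutorial): a regular function gets its `this` from how it is called, while an arrow function captures `this` from the scope it was created in:

```javascript
// Regular function: `this` is the object it is called on.
function regular() { return this.name; }
const ctx = { name: 'io', regular };
// ctx.regular() returns 'io'

// Arrow function: `this` is captured at creation time,
// and even .call() cannot rebind it.
function makeArrow() { return () => this.name; }
const arrow = makeArrow.call({ name: 'outer' });
// arrow.call({ name: 'inner' }) still returns 'outer'
```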
I'm trying to send messages between two IPFS nodes.
The daemon that I'm running is based on go-ipfs, and is running with the flag:
ipfs daemon --enable-pubsub-experiment
I've written two .js files. One is for the subscriber:
const IPFS = require('ipfs')

const topic = 'topic';
const Buffer = require('buffer').Buffer;
const msg_buffer = Buffer.from('message');

const ipfs = new IPFS({
  repo: repo(),
  EXPERIMENTAL: {
    pubsub: true
  },
  config: {
    Addresses: {
      Swarm: [
        '/dns4/ws-star.discovery.libp2p.io/tcp/443/wss/p2p-websocket-star'
      ]
    }
  }
})

ipfs.once('ready', () => ipfs.id((err, info) => {
  if (err) { throw err }
  console.log('IPFS node ready with address ' + info.id)
  subscribeToTopic()
}))

function repo () {
  return 'ipfs-' + Math.random()
}

const receiveMsg = (msg) => {
  console.log(msg.data.toString())
}

const subscribeToTopic = () => {
  ipfs.pubsub.subscribe(topic, receiveMsg, (err) => {
    if (err) {
      return console.error(`failed to subscribe to ${topic}`, err)
    }
    console.log(`subscribed to ${topic}`)
  })
}
And one is for the publisher:
const IPFS = require('ipfs');

const topic = 'topic';
const Buffer = require('buffer').Buffer;
const msg_buffer = Buffer.from('message');

const ipfs = new IPFS({
  repo: repo(),
  EXPERIMENTAL: {
    pubsub: true
  },
  config: {
    Addresses: {
      Swarm: [
        '/dns4/ws-star.discovery.libp2p.io/tcp/443/wss/p2p-websocket-star'
      ]
    }
  }
})

ipfs.once('ready', () => ipfs.id((err, info) => {
  if (err) { throw err }
  console.log('IPFS node ready with address ' + info.id)
  publishToTopic()
}))

function repo () {
  return 'ipfs-' + Math.random()
}

const publishToTopic = () => {
  ipfs.pubsub.publish(topic, msg_buffer, (err) => {
    if (err) {
      return console.error(`failed to publish to ${topic}`, err)
    }
    // msg was broadcast
    console.log(`published to ${topic}`)
    console.log(msg_buffer.toString())
  })
}
I ran the .js scripts with:
node file.js
But the subscriber didn't receive any message from the publisher, and I don't know why.
What is the correct way to connect two nodes in this case?
Maybe I'm wrong, but the npm package ipfs is an entire implementation of the IPFS protocol, and it creates a node when the constructor is called; that's why ipfs daemon ... is not necessary. If you need to use it as an API with the ipfs daemon, you can use the ipfs-http-client package.
You can use ipfs-pubsub-room and it has a working example based on this package ipfs-pubsub-room-demo.
I hope it helps, I'm still learning this tech too.
Currently (2019-09-17) most nodes in the IPFS network don't have pubsub enabled, so the chances your pubsub messages will get through are slim.
You can try to establish a direct connection between your nodes, as explained here: managing swarm connections in ipfs.
Essentially:
1. Run "ipfs id" on the internet-accessible node.
2. Inspect the output and get the address (it should look like this: /ip4/207.210.95.74/tcp/4001/ipfs/QmesRgiWSBeMh4xbUEHUKTzAfNqihr3fFhmBk4NbLZxXDP).
3. On the other node, establish a direct connection:
ipfs swarm connect /ip4/207.210.95.74/tcp/4001/ipfs/QmesRgiWSBeMh4xbUEHUKTzAfNqihr3fFhmBk4NbLZxXDP
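Another thing worth knowing, illustrated below with an in-memory stand-in (this is not the js-ipfs API): pubsub is fire-and-forget. A message published before the subscriber is connected and subscribed is simply dropped, never buffered, so if the publisher script publishes before the two nodes have found each other, the message is lost:

```javascript
// In-memory sketch of fire-and-forget pubsub delivery semantics,
// mimicking why a late subscriber misses earlier messages.
class PubSub {
  constructor() { this.handlers = new Map(); }
  subscribe(topic, handler) {
    if (!this.handlers.has(topic)) this.handlers.set(topic, []);
    this.handlers.get(topic).push(handler);
  }
  publish(topic, msg) {
    (this.handlers.get(topic) || []).forEach((h) => h(msg)); // no buffering
  }
}

const bus = new PubSub();
const received = [];
bus.publish('topic', 'lost');                    // no subscriber yet: dropped
bus.subscribe('topic', (m) => received.push(m));
bus.publish('topic', 'delivered');               // handler fires
```

This is why connecting the nodes first (ipfs swarm connect) and keeping the subscriber running before publishing matters.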
Please see the IPFS GitHub example, as it shows how to connect two js-ipfs browser nodes together via WebRTC.