Full WebRTC stack [closed] - node.js

I am trying to understand what the right tools are for creating an app with text, audio, and video exchange. As the server-side tool I want to use Node.js.
There are lots of examples that show you client-side code, but what about the server side? I know that for WebRTC a server is needed only for signaling purposes, but I can't find a descriptive guide explaining how to say: "Hey, here is this guy, he wants to talk to you, so here's his IP" or something similar.
How do I make sure that I am establishing a direct connection? How do I establish the most performant kind of connection? I know that there are NAT traversal protocols, but how can I explicitly use/enable them?
Can I create a distributed Skype-like network by connecting many-to-many peers and having some signaling/auth servers? Or maybe use some peers as servers for signaling only?

All of the above can be obtained with software such as Flash Media Server (or Red5 for open source). If you want to use Node.js, you will need to either create your own Node services (message queue, media server) or use existing ones, and have Node.js handle the interaction between them. So all of these will be needed (a minimal sketch of the WebSocket signaling piece follows the list):
Node web service(s) with web sockets
Node / other message broker (mq)
Node / other media server (FMS, Red5)
Optional, a caching service for multiple Node web services (Redis)
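A minimal sketch of the Node web service with WebSockets, assuming the ws npm package; the room name and message shape are made up for illustration:

    // Minimal WebSocket signaling relay (sketch, not production code).
    // Assumes `npm install ws`; clients send JSON like
    // { room: "demo", type: "offer" | "answer" | "candidate", payload: ... }
    const { WebSocketServer } = require('ws');

    const wss = new WebSocketServer({ port: 8080 });
    const rooms = new Map(); // room name -> Set of sockets

    wss.on('connection', (socket) => {
      socket.on('message', (data) => {
        const msg = JSON.parse(data);
        if (!rooms.has(msg.room)) rooms.set(msg.room, new Set());
        rooms.get(msg.room).add(socket);
        // Relay the message to every other peer in the same room.
        for (const peer of rooms.get(msg.room)) {
          if (peer !== socket && peer.readyState === 1 /* OPEN */) {
            peer.send(JSON.stringify(msg));
          }
        }
      });
      socket.on('close', () => {
        for (const members of rooms.values()) members.delete(socket);
      });
    });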
You can choose Flash; it has great support for RTMFP/RTMP. If you really want WebRTC, you will have to create a STUN node service for p2p discovery, which is connected to the caching service to handle authorizations.
RTMFP is an option, and so is WebRTC. "Most performant" depends on how you define performance: quality? Latency? How should it be biased? If you want low latency, go for p2p. If you want recording capabilities, use either RTMP or a Node WebRTC relay.
Yes, but you will most likely need a team to do that :)

Found almost all of the answers :)
There are no restrictions on server-side tools, so you can use whatever you want; Node.js fits great. For communication, WebSockets or XHR can be used, but nothing stops you from using anything else.
While creating a connection, the browser will generate events containing all the data you need; you just have to relay them between the two sides and process them there. There is also an offer/answer system, so a connection can only be made with agreement from both sides.
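On the browser side, handling those events looks roughly like this (a sketch; signaling is assumed to be a WebSocket connected to your Node server):

    // Browser-side sketch: exchange offers/answers/ICE candidates over a WebSocket.
    // `signaling` is assumed to be: new WebSocket('wss://your-server');
    const pc = new RTCPeerConnection();

    pc.onicecandidate = (event) => {
      if (event.candidate) {
        signaling.send(JSON.stringify({ type: 'candidate', candidate: event.candidate }));
      }
    };

    signaling.onmessage = async (event) => {
      const msg = JSON.parse(event.data);
      if (msg.type === 'offer') {
        await pc.setRemoteDescription(msg.offer);
        const answer = await pc.createAnswer();
        await pc.setLocalDescription(answer);
        signaling.send(JSON.stringify({ type: 'answer', answer }));
      } else if (msg.type === 'answer') {
        await pc.setRemoteDescription(msg.answer);
      } else if (msg.type === 'candidate') {
        await pc.addIceCandidate(msg.candidate);
      }
    };

    // The calling side kicks things off:
    async function call() {
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      signaling.send(JSON.stringify({ type: 'offer', offer }));
    }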
The browser will try to establish the best connection possible by default. If that can't be done, it will fall back to TURN, which relays data through a server.
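To explicitly use/enable NAT traversal (STUN/TURN), pass the servers in the RTCPeerConnection configuration; the hostnames and credentials below are placeholders:

    // Sketch: explicit STUN/TURN configuration (placeholder servers and credentials).
    const pc = new RTCPeerConnection({
      iceServers: [
        { urls: 'stun:stun.example.org:3478' },
        { urls: 'turn:turn.example.org:3478', username: 'user', credential: 'secret' }
      ]
    });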
It will be possible once DataChannels are implemented.

Related

How PWA can be useful rather than developing a Simple web Application [closed]

I know I am very bad at asking a question so please tell me rather than downvote me...thanx
Q1: How is a Progressive Web Application useful?
Q2: What type of application should be built using PWA? Is there any specific kind, for example an application with a lot of CPU utilization, or simple static pages with only small interactions with the server?
Q3: What should the application architecture be, in general?
There are a lot of reasons for using PWA rather than plain web apps or native applications.
A1: To answer your first question, there are some articles on the internet; here are a few I recommend reading:
Google Developers PWA documentation
What is PWA?
Important tips about PWA
A2: There are no limitations or restrictions on the app you want to develop. (You should still pay attention to key things like caching, which matter in native/web apps too.)
A3: The architecture is very similar to a web app's, except it must have some additional files:
manifest.json (used to declare things like the application name and icons; it must be placed in the root of the project)
serviceWorker.js (gives you additional features like push notifications and background work; a minimal caching sketch follows below)
NOTE: As is obvious, your Progressive Web App should be responsive to support different mobile screen resolutions.
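As a rough illustration of the serviceWorker.js mentioned above, a minimal cache-first worker could look like this (the cache name and asset list are just examples):

    // serviceWorker.js (sketch): pre-cache the app shell and serve it cache-first.
    const CACHE_NAME = 'app-shell-v1';                             // example cache name
    const ASSETS = ['/', '/index.html', '/app.js', '/styles.css']; // example assets

    self.addEventListener('install', (event) => {
      event.waitUntil(caches.open(CACHE_NAME).then((cache) => cache.addAll(ASSETS)));
    });

    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then((cached) => cached || fetch(event.request))
      );
    });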
PWA is not a single technology or a framework; it is a set of web features that help you improve your application progressively.
This means that if you have a modern browser you will get an awesome user experience; in browsers that don't support those features, your application keeps working with its existing features as-is.
Let's talk about the features we can use to enhance an existing or new web application.
You can bring the native look and feel of mobile apps to your web pages. This is not just about responsiveness: you can access native features such as the camera, geolocation, and push notifications.
Offline capability through caching when your internet connection is lost.
Background Synchronization of data
An icon on the home screen: you don't need to install the application from an app store to place it on your home screen.
There are three important things I want to summarize about progressive web applications.
Reliable: the application loads instantly even under uncertain network conditions and provides offline functionality through caching.
https://developers.google.com/web/progressive-web-apps/#reliable
Fast: responds as quickly as possible to user interactions.
https://developers.google.com/web/progressive-web-apps/#fast
Engaging: feels like a native app on mobile devices.
https://developers.google.com/web/progressive-web-apps/#engaging
Q1: Progressive web applications (particularly the service worker part of them) are useful because they can (a) be very fast and (b) work offline. Using a service worker to cache resources (HTML, JS, CSS) on the user's device can create almost instant page-load times on subsequent visits to your site. In addition, this can make your site available even without a network connection. Progressive web apps (with a manifest file) can also be installed on the device home screen, making them easily accessible, like native apps.
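For reference, registering such a service worker takes only a few lines in the page's JavaScript (the path is an example):

    // Register the service worker (path is an example).
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/serviceWorker.js')
        .then(() => console.log('service worker registered'))
        .catch((err) => console.error('registration failed', err));
    }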
I'm not sure I understand Q2 and Q3, so I'll leave those for someone else to answer.

WebRTC video conferencing app - star topology: how to get started? [closed]

I am developing a video conferencing application for education purposes that uses WebRTC. It needs to be done in a star topology as it connects up to 20 participants.
Conceptually it is easy to understand, but I don’t know how to start, as I do not have any examples.
All clients will connect to a server using WebRTC, and the server will mix the video streams in a specific layout and send it back to all clients. Here are my questions/difficulties:
How to implement the server part? What’s the best technology (e.g. NodeJS)? Are there simple examples of a star topology application like that?
How can we start writing the MCU code? Are there examples? Or is it easier to customize an open source MCU like Licode/Lynckia?
How can I estimate the right AWS EC2 instance type that we will use as the MCU server?
How can I estimate the data transfer cost (the size, in GB/TBs) which will be transferred in 1h of conference?
Thanks a lot in advance,
Carlos
My two cents on your various doubts:
Personally, I prefer NodeJS, but from what I have seen, the application server does not play much of a role in WebRTC communication other than passing messages between peers / media servers, so go with a technology you are comfortable with.
That said, for examples, you can check out Kurento's tutorials in both Java and Node.js, the Licode example (using NodeJS), and Jitsi Meet in Java.
Yes, I think going with an existing MCU is a good idea; an even better option is an SFU. The difference is that an SFU just forwards streams rather than mixing them; mixing streams is a costly process, so an MCU needs high processing power. SFUs are comparatively light; all you need is good bandwidth for the server.
About the last two points I don't have much idea; it depends on your use case, the video resolution of the streams, and how many people join. You need to run some tests and gauge it; a rough back-of-envelope estimate is sketched below.
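For example, a purely hypothetical calculation (the per-stream bitrate is an assumption; real numbers depend on resolution and codec):

    // Rough data-transfer estimate for a mixed (MCU) star topology.
    // Assumption: each participant sends one stream up and receives one mixed stream down.
    const participants = 20;
    const bitrateMbps = 1;      // assumed average bitrate per stream (1 Mbit/s)
    const seconds = 3600;       // one hour of conference

    const streamsThroughServer = participants * 2;            // 20 up + 20 down
    const totalMbits = streamsThroughServer * bitrateMbps * seconds;
    const totalGB = totalMbits / 8 / 1000;                    // Mbit -> GB (decimal)
    console.log(totalGB.toFixed(1), 'GB per hour');           // ~18 GB/h under these assumptions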
Simulcast is another interesting idea; unfortunately, I believe it is still in development.
We built a solution based on NodeJS and WebRTC. With this (mesh) approach there is one big problem: everyone sends a video stream to everyone. If you have 20 participants, then every computer sends and receives 19 video streams.
We created a restriction of "up to 4 participants" per room, and it works fine.
So in my opinion, if you have enough time, you can mix two technologies: WebRTC (for up to 4-5 users) to save server resources, and something different for bigger meetings.

Is there any open source alternative to talky.io? [closed]

Is there any open source alternative to talky.io where the client code and all server-side code are available?
I just double checked and it seems the priologic team are keeping tawk.com code behind a paywall, just like talky.io.
In the WebRTC ecosystem, vendors focus on one of the following:
an application, where the back end is hidden (think Skype, or Bistri, even though Bistri pivoted to propose an API as well);
a PaaS, where the server code is hidden and you won't get an application, at best a demo, because they do not focus on any vertical and do not implement business logic (think AWS in general, or TokBox and Temasys for WebRTC);
consulting/app development, where they provide a complete application, most often open source, but keep some key components behind a paywall (priologic: mobile SDK + app; &yet: app; Algoworks, ...). They usually team up with a PaaS and/or hardware vendor to provide a more complete/scalable solution to their clients (priologic/Oracle, ...).
It is very unlikely that a vendor would provide a full solution as open source, and I don't know of any. It is still too complicated for a non-commercial entity to provide one. The ones that do provide a full solution do so for a limited scope.
In any case, an application is always focused on a use case. Even though the underlying infrastructure might be the same, and the back-end/client API might be the same, an app for a contact center, an app for social dating, and an app for conferencing will be quite different because they implement quite different business logic and address quite different markets. It is reasonable NOT to expect a full stack, but to expect only the topmost layer to be left to implement.
I put a list of vendors and products there, but it's a little bit raw. So here is a recipe to build a free/open-source solution, and then where to look to upgrade:
mandatory: open source signaling server (easyrtc, signalmaster, peerjs-server, rtc.io, ...)
mandatory: BE API (easyrtc, simplertc, peerjs, rtc.io respectively)
optional but highly recommended: add a free TURN server (rfc5766), or the more advanced version, coturn. Some of the open source servers and libraries provide examples or how-tos for running this TURN server yourself (a sample configuration is sketched after this list).
optional: a client API that brings you closer to your use case,
optional: a free plugin to support IE and Safari (temasys free plugin),
optional: a media server if you need to host many-to-many calls or conferences (MCU or SFU) (Licode, Meetecho's Janus, Medooze, Kurento, Jitsi's Videobridge)
optional: a SIP gateway to connect to VoIP and/or, by extension, the phone network (PSTN).
and ... that's about as far as you can go with open source / free libs today. You might hit a scalability problem quite fast depending on your traction.
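For the TURN piece, a bare-bones coturn turnserver.conf might look like this sketch; the realm and credentials are placeholders:

    # Minimal coturn turnserver.conf sketch (values are placeholders)
    listening-port=3478
    fingerprint
    lt-cred-mech
    user=webrtcuser:somesecret
    realm=example.org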
A next step would be to get hosted servers, but it's not free anymore.
Separate servers:
ICE/TURN/STUN: See xirsys/twilio for hosted solution,
Media server: see Dialogic and Radisys for hardware, and Medooze, Jitsi, Acano, Pexip, OpenClove for software/hosted solutions,
Full Paas including all of the above:
tokbox (beware of streamed minutes billing if you have large conferences, has recording and some features temasys does not have yet)
temasys
some of the media server vendors also market themselves as PaaS. I have not tested them, so I can't comment or recommend.
If you want to connect to SIP/phone, you will need different vendors, as neither Temasys nor TokBox provides interoperability today.
You could have a look at jitsi https://jitsi.org/, which is an opensource solution for private communication and also serves as a video conference tool for the browser.
You could try Subrosa (latin for "under the rose"). According to https://subrosa.io/source: "The Subrosa client and server are both open source and licensed under GPLv3."
It would be better if the server component were GNU AGPL 3.0, to make sure anyone running a server makes their code changes available for re-use, but at least both ends are free code.

Asterisk + Node.js + Browser Streaming [closed]

I would like to build a service that allows a user to listen to a call live from their browser.
I have some experience with Asterisk and this seems to be flexible enough to do what I have described.
Node.js sounds good because it is purported to handle concurrency well, and I like JavaScript.
In the browser I figure that the HTML5 audio tag, since it handles playing from a streaming source, would be fine to play the sound.
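For the playback side, a sketch with a placeholder stream URL:

    // Point an HTML5 audio element at the live stream (URL is a placeholder).
    const player = new Audio('https://streams.example.org/live-call');
    player.play();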
A colleague of mine put together a demo of this concept using Icecast, but was not able to finish it. There were also significant latency issues.
My question is this:
How should I go about getting started on this?
Any help is appreciated!
Update:
I found a presentation discussing implementing SIP on top of WebSockets via a SIP proxy on the backend:
http://sip-on-the-web.aliax.net/
Once I have this up and running, the next step would be implementing the streaming. It seems like I should be able to proxy the audio output that would normally go to the SIP client through a secondary server that then streams it to the browser. I wonder why this couldn't be done all in memory? Then there would be no need to write and read a file as the call proceeds.
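The JavaScript SIP stack from that project is JsSIP; registering and placing a call from the browser over WebSockets looks roughly like this sketch (the proxy URL, URIs, and credentials are placeholders):

    // Browser-side sketch using JsSIP (URIs and credentials are placeholders).
    const socket = new JsSIP.WebSocketInterface('wss://sip-proxy.example.org');
    const ua = new JsSIP.UA({
      sockets: [socket],
      uri: 'sip:listener@example.org',
      password: 'secret'
    });
    ua.start();

    // Dial the extension that is bridged into the call being monitored.
    ua.call('sip:monitor@example.org', {
      mediaConstraints: { audio: true, video: false }
    });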
If you're willing to wait for Asterisk 11, we're currently working on implementing support for WebSockets directly in Asterisk. More on it here:
Asterisk 11 WebRTC/RTCWeb Support
I'll just quote Kevin here, since he summarizes it better than I can:
"Today, the in-progress development branches have support for the WebSocket transport protocol (used for communicating signaling messages between the browser and Asterisk), SIP over WebSocket (currently being standardized by the IETF) and ICE/STUN/TURN (media handling mechanisms for NAT traversal and connection setup security). In addition, there’s a new Jingle/Google Talk/Google Voice channel driver, and we plan to support Jingle over WebSocket as well. At this point, we don’t have a quite complete solution (a new Canary build of the Google Chrome browser is needed with a few small changes), but each of the pieces has been tested and we’re anxious to see it all work together. We’d like to thank Iñaki and José from the SIP-on-the-Web project for providing us their JavaScript SIP stack to use during our testing, and we’ll probably be testing with the PhonoSDK as well for Jingle support."
This seems like a nice guide:
Remote call-center solution using Node.js
I've built a similar solution here. In this post I'm talking a little about it:
http://www.igorescobar.com/blog/2014/08/13/working-with-asterisk-and-node-js/
I built a call center solution using Node.js (Express/Socket.io), Javascript, HTML5 and CSS3.
I think trying to stream an audio file while it is being recorded will have extreme latency issues that you will not be able to get around. If you want to get real-time listening to a phone conversation I would suggest looking into Phono. It is a JQuery plugin that turns your web browser into a phone. Then you would just have the listener conferenced in to the conversation with it on mute.
If you don't mind the latency (caused by buffering of the Icecast stream), Asterisk is able to stream to Icecast (configure Asterisk's Ices application).
If you can't tolerate the latency, you'll need a browser-based SIP client. Unfortunately, there's not many of them that aren't locked to someone else's phone system. You might try red5phone (http://code.google.com/p/red5phone/) but that requires that you set up a Red5 server.

How to secure an Internet-facing Elastic Search implementation in a shared hosting environment? [closed]

I've been going over the documentation for Elastic Search and I'm a big fan and I'd like to use it to handle the search for my ASP.NET MVC app.
That introduces a few interesting twists, however. If the ASP.NET MVC application was on a dedicated machine, it would be simple to spool up an instance of Elastic Search and use the TCP Transport to connect locally.
However, I'm not on a dedicated machine for the ASP.NET MVC application, nor does it look like I'll move to one anytime soon.
That leaves hosting Elastic Search on another machine (in the *NIX world) and I would probably go with shared hosting there.
One of the biggest things lacking from Elastic Search, however, is the fact that it doesn't support HTTPS and basic authentication out of the box. If it did, then this question wouldn't exist; I'd simply host it somewhere and make sure to have an incredibly secure password and HTTPS enabled (possibly with a self-signed certificate).
But that's not the case.
That given, what is a good way to expose Elastic Search over the Internet in a secure way?
Note, I'm looking for something that hopefully, will not require writing code to provide shims for the methods that I want (in other words, writing forwarders).
A plugin for elasticsearch that allows you to replace the HTTP transport with an embedded instance of Jetty is now available.
Because it uses Jetty to handle the HTTP transport, it can handle SSL connections as well as be configured for authentication.
(Note, the following is still sound advice, in that it's generally good practice to abstract your operations out in this manner)
After a number of discussions on the ElasticSearch mailing list, I've discovered that the current solution is to host ElasticSearch behind another application layer and then to secure that layer.
The reasoning is solid; ElasticSearch is akin to a database, and you wouldn't make your database public-facing to all.
Something that I (and others) trip up on is that because ElasticSearch uses HTTP as a transport and uses JSON as the syntax for operations, that ElasticSearch is meant to be public-facing.
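One common way to implement that extra layer without writing shim code is a reverse proxy in front of Elasticsearch; for example, an nginx sketch with HTTPS and basic auth (the hostname, certificate paths, and htpasswd file are placeholders; 9200 is Elasticsearch's default HTTP port):

    # nginx sketch: terminate HTTPS and require basic auth in front of Elasticsearch.
    server {
        listen 443 ssl;
        server_name search.example.com;

        ssl_certificate     /etc/nginx/ssl/search.crt;    # placeholder paths
        ssl_certificate_key /etc/nginx/ssl/search.key;

        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;        # created with the htpasswd tool

        location / {
            proxy_pass http://127.0.0.1:9200;             # local Elasticsearch instance
        }
    }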
However, there is currently a request to add HTTPS transport support (assuming a certificate is provided) along with basic (digest) authentication.
You'll have to firewall the machine in some way, permitting only traffic from the app server, e.g. using iptables on Linux or some kind of personal firewall on Windows.
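As an illustration, an iptables sketch restricting Elasticsearch's default HTTP port (9200) to a single app server (the IP is a placeholder):

    # Allow only the app server to reach Elasticsearch's HTTP port; drop everything else.
    iptables -A INPUT -p tcp --dport 9200 -s 203.0.113.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 9200 -j DROP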
This takes you into serverfault.com territory, though - there isn't a programming solution to this one.

Resources