How does Rabb.it stream browser windows to users - linux

I am just wondering how Rabbit is able to give each user a different browser to use from a Linux machine. It seems like VNC technology, but I don't know for sure. Please let me know if you know how they are able to do that.

There is a somewhat detailed blog post about what their architecture was here: https://bloggeek.me/rabbit-webrtc-interview/
I will quote the relevant part for longevity:
We have two main stacks, one for audio/video and one for our business logic.
Our audio/video stack is built in Java on top of Netty. Our SFU allows us to use WebRTC with much larger groups than the normal use case. For our shared viewing feature (called Rabbitcast™), we had to build a native extension to capture and deliver an HD stream with audio from our virtual machines. Both of them use our own WebRTC server stack to talk to the clients.
Our Business Logic stack is built on top of Node.js using a promise-based approach to keep our sanity.
Lastly, we use Redis both for intelligent caching and pub/sub. MongoDB is our persistent storage.

I am not sure exactly what they are using, but I have some ideas about how it works. As you already said, they are using virtual machines whose desktops are exposed through an HTML5 VNC client for control, with the video and audio streamed to the user. Other options would be xpra, x2go, or Apache Guacamole to expose the sessions in an HTML5 client.
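The VNC route mentioned above can be sketched concretely. Everything below is a hypothetical illustration, not Rabbit's actual stack: the display number, resolution, port, and the choice of Xvfb, Firefox, and x11vnc are all assumptions; an HTML5 client such as noVNC would then attach to the exported display.

```python
# Hypothetical sketch of the "browser in a VM, streamed over VNC" recipe:
# a virtual framebuffer (Xvfb), a browser rendered into it, and x11vnc
# exporting that display so an HTML5 VNC client can attach to it.
# Display number, resolution and port are made-up example values.
import subprocess

DISPLAY = ":99"           # virtual display for this user's session
GEOMETRY = "1280x720x24"  # width x height x color depth
VNC_PORT = 5900           # port the HTML5 VNC client connects to

def xvfb_cmd():
    # Headless X server backing the session
    return ["Xvfb", DISPLAY, "-screen", "0", GEOMETRY]

def browser_cmd(url):
    # Any browser works; it just needs DISPLAY pointed at the virtual screen
    return ["env", f"DISPLAY={DISPLAY}", "firefox", url]

def vnc_cmd():
    # Export the virtual display over VNC; noVNC/websockify would sit in front
    return ["x11vnc", "-display", DISPLAY, "-rfbport", str(VNC_PORT), "-forever"]

def launch_session(url):
    # In a real deployment each user would get an isolated VM or container
    return [subprocess.Popen(cmd) for cmd in (xvfb_cmd(), browser_cmd(url), vnc_cmd())]
```

Per-user isolation (one VM or container per session, as Rabbit apparently did) is what keeps users from seeing each other's browsers; the streaming part itself is just a virtual display plus a VNC exporter.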

Related

Web-application design on embedded linux

I am working for a lighting automation company and we will design and develop a product which will run a Yocto/Buildroot embedded Linux operating system.
We will use a Linux SoM inside the product; the approximate specs of the SoM are:
1.2/1.5GHz MPU
128/256MB RAM
4/8/16GB eMMC/SD
various peripherals (UART, SPI...)
At this point, the Linux side must implement a web-based app which monitors and controls the luminaires. In general, the project intends to control the lighting of a building/home using the web app running on the device. The front end shall show each luminaire on the page, with relevant buttons and icons helping the client control and monitor the luminaires. The front end may have a couple of different pages. Overall there can be a maximum of 250 luminaires, with 10 bytes of data for each luminaire.
I will have an MCU running alongside which does the real-time work and is connected to the Linux SoM over UART. The real-time MCU communicates with the luminaires and sends their data to Linux through the UART, and vice versa. The web app should start a web server, I guess, so that the client can connect to the app from his/her PC or smartphone browser. I also think I will need a database, because the device should retain its data once restarted or in case of a power failure.
At this point I am not sure what kind of design I should choose. I do not want to create a complex application, and I do not want to over-engineer. We are currently 2 embedded guys, and 2 software guys will join us soon. I am an embedded C/C++ guy, and although I know in a very general sense how Vue.js, React.js etc. work, I am not really sure how well they will do on embedded Linux with restricted resources such as RAM.
I have 3 different designs in my head:
1st ->
Receive data through UART directly using a high-level language inside the web-app backend (Node.js, Flask or ??? if possible)
Web-app backend (Node.js, Flask etc. or ???) either writes received data to a database (SQLite ??) or acts on it directly in a proper way
Front-end communicates with the backend through REST APIs (Vue.js, React or ???)
2nd ->
Receive data through UART with a plain C executable (circular buffer etc.)
Web-app backend (Node.js, Flask or ???) receives data through a local socket from the C program and does database operations etc.
Front-end communicates with the backend through REST APIs (Vue.js, React or ???)
3rd -> if Flask, Vue.js etc. complicate the Linux application
Receive data through UART with a plain C executable (circular buffer etc.)
Use lighttpd or similar to start a web server and use FastCGI?
As far as I have learnt from the web, with the specs of the SoM I will use, technologies such as Node.js and Vue.js can be handled easily and there should be no problem at all. If so, even though it is a quite general question: how do I do it in a simple and modern way?
I think the best approach is the first.
That way you build the whole system from modules, so in the future it will be easier to change something.
All the frameworks you would use are maintained by big companies, so they will be supported for a long time.
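The storage half of designs 1 and 2 can be sketched with nothing but the Python standard library. The 10-byte-per-luminaire figure comes from the question; the frame layout (1-byte ID plus 9 payload bytes) and the table schema are assumptions made up for illustration, and the UART reading itself (pyserial, or the plain C program from design 2) is omitted.

```python
# Minimal sketch of the storage side of designs 1/2, stdlib only: a fixed
# 10-byte record per luminaire is parsed and upserted into SQLite, which
# survives restarts and power loss as the question requires.
# Assumed frame layout: byte 0 = luminaire ID, bytes 1..9 = its state.
import sqlite3

def open_db(path=":memory:"):
    # On the device this would be a file on eMMC/SD, not :memory:
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS luminaire (id INTEGER PRIMARY KEY, payload BLOB)"
    )
    return db

def store_frame(db, frame: bytes):
    # Upsert the latest state for this luminaire
    assert len(frame) == 10
    db.execute(
        "INSERT OR REPLACE INTO luminaire (id, payload) VALUES (?, ?)",
        (frame[0], frame[1:]),
    )
    db.commit()

def all_luminaires(db):
    # What a REST endpoint like GET /luminaires would serialize to JSON
    return {row[0]: row[1].hex()
            for row in db.execute("SELECT id, payload FROM luminaire")}
```

A Flask or Node backend would call `store_frame` as frames arrive from the UART (or from the local socket fed by the C program) and serve the result of `all_luminaires` from a GET endpoint; at 250 luminaires × 10 bytes the whole dataset is trivially small for SQLite on this class of hardware.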

how to communicate with my apps using IP address and socket remotely

I am new to Electron and Node.js,
and I am stuck here. I am making a desktop app to control every PC in the network; it tells me the IP and MAC of the computers in the network. But now I need to talk to them and push/get some messages. But how?
Socket.io is likely the easiest way to do what you are trying to do. It'll allow you to communicate between the machines with a relatively low amount of effort.
Sockets generally work on a "server" and "client" basis, so you may want a central server that will coordinate with the clients.
This blog post from NodeSource provides a really good intro to using them.
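The server/client split described above is the same regardless of library. Below is a minimal sketch of the pattern, in Python purely for brevity (the question's Electron app would use socket.io or Node's `net` module): a central server accepts a message pushed by a client and replies to it.

```python
# Minimal "central server coordinates clients" sketch. Socket.io adds
# reconnection, rooms and named events on top of this; underneath it is
# still a socket server and socket clients exchanging messages.
import socket
import threading

def start_server(host="127.0.0.1", port=0):
    # port=0 lets the OS pick a free port; a real app would fix one
    srv = socket.create_server((host, port))

    def handle():
        conn, _ = srv.accept()
        with conn:
            msg = conn.recv(1024)        # message pushed by a client
            conn.sendall(b"ack:" + msg)  # server's reply back to it

    threading.Thread(target=handle, daemon=True).start()
    return srv.getsockname()[1]

def send(port, payload: bytes):
    # What each controlled PC's client side would do to talk to the server
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(payload)
        return c.recv(1024)
```

With the IPs you already discovered, each machine runs the client side and connects back to the coordinating server, which is exactly the topology the answer recommends.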

opensips open ims and asterisk configuration for audio video calls on Ubuntu?

I am not sure if this is the correct place for such a question, but unfortunately I did not find any other Stack Exchange site to ask it on. I have read some similar questions here, like the ones on Open IMS and on Asterisk.
My question is: I want to make audio and video calls on my Linux-based local network (Ubuntu 12.10). After googling a lot and studying, I found that Open IMS, Asterisk and OpenSIPS do what I need. But I am not sure what they are, or whether they fulfill my requirements. How do I configure them to build an audio/video chat system on a LAN?
Please help.
You can configure an OpenIMS server and then use a SIP client to register with it. That way you can make audio and video calls to another SIP client.
OpenIMS configuration can be done on a Linux-based system.
I will describe it in simple words here; although this is an old thread, it may help anybody searching in the future.
OpenSIPS can work as a SIP proxy server, which is used for scaling purposes. It does not have a media server, which means it can route calls from A to B but cannot do codec conversion or IVR, for example, and it is efficient. An alternative sibling project is Kamailio, and OpenSIPS has a web interface for configuration.
Generally speaking, many people use OpenSIPS -> Asterisk or FreeSWITCH.
If you want to scale further, HAProxy is an option.

Listening on a particular port on Linux to access data coming from a mobile device

I am a newbie to the Linux platform; I am working with Java technology.
What I have to do: there is a program running on mobile devices that sends some data to my Linux machine. Now I have to create a program in Java that will:
listen on a particular port.
access the data that comes in on that port (sent by the mobile device).
save that data to the database.
respond back to the mobile device.
In other words, I would make my Linux system a server that can listen to many clients (mobile devices), but I am not sure how to configure this environment... :(
I am using CentOS 5.4 and
have installed JDK 1.6.0_24.
Any help would be appreciated.
Thanks in advance!
khushi
One of Java's greatest strengths is that you can pretty much ignore the host operating system as long as you stick to core Java features. In the case you're describing, you should be able to accomplish everything by simply using the standard Java networking APIs and either JDBC to access an existing, external database, or any number of embedded Java databases such as Derby. For your stated use case, the fact that you'll be running the application on Linux is pretty much irrelevant (which should be good news... you don't need to learn a whole operating system in addition to writing your app ;-).
Here's a nice client/server tutorial, in that it is broken into steps, and adds each new concept in another step.
Here's another client/server tutorial with much more detail.
I would write it to accept one connection at a time. Once that works, I would study the new(ish) java.util.concurrent classes, in particular ExecutorService, as a way of managing the worker handling each connection. Then change your program to handle multiple connections using those classes. Breaking it up into two steps like that will make it a lot easier.
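The two-step plan above (single connection first, then a worker pool) can be sketched as follows. This is an illustrative Python translation of the pattern, not the Java code itself; in the Java version, `ServerSocket.accept` and `java.util.concurrent`'s `ExecutorService` play the same roles as the accept loop and thread pool here.

```python
# Sketch of "accept loop + worker pool": each accepted connection is handed
# to a bounded pool, so many mobile devices can be served concurrently
# without spawning an unbounded number of threads.
import socket
from concurrent.futures import ThreadPoolExecutor

def handle(conn):
    # Per-connection worker: read the device's data, store it, reply.
    with conn:
        data = conn.recv(1024)
        # ... save `data` to the database here (JDBC / Derby in the Java app)
        conn.sendall(b"OK " + data)

def serve(srv, pool, n_conns):
    # Accept loop; the pool size bounds how many devices are handled at once.
    # A real server would loop forever; n_conns keeps the sketch finite.
    for _ in range(n_conns):
        conn, _ = srv.accept()
        pool.submit(handle, conn)
```

Getting the single-connection `handle` correct first, then wrapping it in the pool, mirrors exactly the incremental approach the answer recommends.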

Cross platform multimedia kiosk

My team is tasked with building a full screen, kiosk-style application for playing back media files. Initially we need to support WMV / MP4 as well as some images in full 1080p, although down the line we will need to extend this to cover other formats (different video formats as well as display of HTML, SWF, etc.).
The application also contains a decent chunk of business logic relating to scheduling, logging, performance monitoring as well as network code to talk to a central server through web services (or maybe TCP) and potentially act as a server itself.
For our WMV / MP4 video playback, hardware acceleration will be a massive bonus. The targeted hardware has weak CPUs but strong graphics cards.
Here's the kicker: we're a .NET shop (our existing application is a WinForms smart client) and extremely experienced and productive in C# and the .NET stack. The app will initially be targetting Windows Embedded (.NET 3.0), but we will quickly need a Linux version as well. Between us we have some C/C++ experience and some Linux experience but we do not anticipate good productivity on that platform.
So I am soliciting recommendations specifically on the following points:
Video. On Windows we have seen good success using DirectShow.NET. On capable hardware, the WPF MediaElement also seems to perform well. What should we be using on Linux? libavcodec seems like a common choice. Is it hardware accelerated on NVidia graphics cards on Linux? What other options do we have on Linux? Is there something cross-platform that I could consider?
Stack.
a) Ideally we could write the whole thing in .NET and then run under Mono on Linux. The video playback and presumably some other components (like performance monitoring) would not be supported on Mono. I guess we could rewrite these elements in, say, C++; but I'm guessing that most stuff on the business logic side would work.
b) Maybe it's better to forfeit our up-front productivity on the Windows version for something that's cross platform out of the gate. What about Java? Do we have different options when it comes to video there? How about another framework? Something like Qt? Can anyone else suggest something cross platform that would be relevant?
Broadly speaking, given the requirements, what would you use?
I appreciate any answers you might have.
My suggestion is that you use Fluendo's GStreamer components for the video playback as it has support for hardware acceleration where available and fully licensed codecs.
You can look at the Banshee media player, which supports video playback if you have the Fluendo/GStreamer packages installed. Get openSUSE 11.2, which contains everything you need to try it and develop, and then buy and install the Fluendo codecs.
Source-code-wise, Banshee does the video display from C#. The C# source code consuming GStreamer and doing the video rendering is here:
http://git.gnome.org/browse/banshee/tree/src/Extensions/Banshee.NowPlaying/Banshee.NowPlaying
The C supporting library to call into Fluendo is available here:
http://git.gnome.org/browse/banshee/tree/libbanshee
For testing Banshee, you do not need to buy anything, but your video codecs will be limited to Ogg/Theora encoded videos. Once you get Fluendo's codecs you will be able to play WMV files.
One option would be to use Silverlight, and explore Moonlight as an option for the linux version. My understanding is that Moonlight has several media/codec plugins (I believe ffmpeg is the main provider) and can additionally use the MS codec pack to give you support for things like WMV/MP4.
You can use ffmpeg in mono and .net. This may or may not include video display - ffmpeg usually just provides you with a decoded bitmap that you can do whatever you want with, be it display it in a window, save it in a file, whatever. If you use ffmpeg-sharp the same code should work on Windows or Linux. Really, putting the bitmap in a window is the easy part.
Moonlight offers two codec options: (a) a fully licensed version that comes straight from Microsoft and requires no further negotiation with the MPEG-LA and other patent holders, or (b) an ffmpeg backend that requires you to negotiate with the patent holders yourself if you plan on using it.
You could build a Silverlight-based application, the trick to get access to the local system is very simple: you run a local web server that exposes those services.
You can still use C#/Sqlite or VistaDB as your storage system as part of your Silverlight application.
You could host the Silverlight app at http://localhost/App.xap and this app would gain local access to the machine by contacting a REST or SOAP web service at http://localhost/rest.ashx or http://localhost/soap.asmx
For example, if you needed to read some values from a scanner connected to the machine, you would issue this request:
http://localhost/scanner.ashx?operation=scan_badge
Then your scanner.ashx HttpHandler will do the actual scanning (this one has full system rights) and return the value to the Silverlight application.
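The localhost-bridge trick in this answer is easy to prototype in any language. Below is a sketch using the Python standard library in place of the ASP.NET `scanner.ashx` HttpHandler; the endpoint shape and the `BADGE-0042` return value are made-up stand-ins for whatever the real scanner would produce.

```python
# Sketch of the localhost bridge: a tiny local HTTP service running with
# full system rights exposes an operation (here a stubbed badge scan) that
# the sandboxed UI fetches over http://localhost.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class ScannerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        if query.get("operation") == ["scan_badge"]:
            body = b"BADGE-0042"   # stub: the real handler would drive the scanner
            self.send_response(200)
        else:
            body = b"unknown operation"
            self.send_response(400)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

def start_bridge(port=0):
    # port=0 picks a free port for the demo; the answer assumes port 80
    srv = HTTPServer(("127.0.0.1", port), ScannerHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv.server_port
```

The sandboxed app (Silverlight in the answer, but the same applies to any browser-hosted UI) then simply issues a GET to the operation URL and reads the response, with the bridge process doing the privileged work.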
