I have 2 different Unity apps and I wish to connect them. My aim is to stream live video from App 2 to App 1; that is, I want App 2 to act as the server and sender, and App 1 to act as the receiver and show the video in a panel or RawImage. The second app will run on iOS or Android and access the native camera; in the final product I want whatever is visible in App 2 to be visible in App 1's image. Only the image. What I have achieved so far:
I have actually finished about 90% of both apps, and this is the last step I need. When it comes to servers and networking I just don't have the required knowledge. Can someone tell me how I can do this? How can I stream live video from one Unity app to the other? Note: the second app will run on an iPad or any Android phone, while the first app will run on a normal desktop. Regarding the networking part, should I use Node.js, WebSockets, Unity networking? What? Even if I know what to use, how do I actually stream the video? This is the first project I have done that involves networking and servers, and I really don't have the experience needed for this. Any help is much appreciated.
Rather than trying to stream the app, it might be easier to use sockets or Unity's multiplayer services: recreate the relevant assets client-side and have the server send position updates.
In terms of what to use, Unity Networking looks like a good choice. When I've done Unity client-server work before, I've done it with sockets and that works okay, especially if you don't have too many things to keep track of.
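To make "use sockets" a bit more concrete, here is roughly the kind of relay I mean, written with Node.js and the `ws` npm package since the question mentions Node.js/WebSockets. This is only a sketch; the port and message handling are assumptions, not something from a finished project:

```typescript
import { WebSocketServer, WebSocket } from 'ws';

// One "sender" (the mobile app) connects and pushes updates; whatever it
// sends is rebroadcast to every other connected client (the desktop app).
// The payload could be position updates or even small camera frames.
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (data, isBinary) => {
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data, { binary: isBinary });
      }
    }
  });
});
```

On the Unity side you would open a WebSocket client in each app and decide on a small message format (JSON for positions, or raw bytes for frames); the relay above never inspects the data, it only forwards it.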
Related
Hello, I am looking for a way to forward my live stream from my server to another server, for example Facebook, via RTMP.
The structure would be something like:
My cam -> my server -> other server rtmp -> viewers
My intention is to capture the transmission and forward it to many RTMP servers, so that the server's resources are consumed rather than the client's. I don't have much knowledge of video transmission; if it is possible to do this with Node.js, that would be great. Thanks.
I have looked into SFUs and other possible approaches, but I want to have several alternatives and find the most suitable one to implement in production.
I have never done it myself, so I can't recommend the best way to do it.
After some research, if you want to stay with Node.js, I personally recommend Mediasoup.
It is a powerful SFU developed in C++ that provides really good bindings for Node.js. All the heavy processing is done in C++: the Node.js API spawns a child process in which the C++ mediasoup worker runs. You only have to care about the Node.js API, nothing else.
With mediasoup it should not be too difficult to get your stream onto the Node.js server.
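As a very rough sketch (not a drop-in solution), the server-side bootstrapping with mediasoup v3 looks something like this; the codec list and IP addresses are placeholders you would adapt:

```typescript
import * as mediasoup from 'mediasoup';

async function startMediasoup() {
  // One worker (the C++ child process) is enough for a small setup.
  const worker = await mediasoup.createWorker({ logLevel: 'warn' });

  // The router defines which codecs producers are allowed to use.
  const router = await worker.createRouter({
    mediaCodecs: [
      { kind: 'audio', mimeType: 'audio/opus', clockRate: 48000, channels: 2 },
      { kind: 'video', mimeType: 'video/VP8', clockRate: 90000 },
    ],
  });

  // One WebRtcTransport per connecting browser; the browser then produces
  // its camera/mic tracks onto it via your own signaling channel.
  const webRtcTransport = await router.createWebRtcTransport({
    listenIps: [{ ip: '0.0.0.0', announcedIp: 'YOUR_PUBLIC_IP' }], // placeholder
    enableUdp: true,
    enableTcp: true,
  });

  return { worker, router, webRtcTransport };
}
```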
After that, for transmitting your stream to an RTMP server, it seems you can call ffmpeg in a child process to transfer it from your Node.js server to the RTMP server.
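The usual pattern, and an assumption on my part about your exact setup, is to have mediasoup consume the producer onto a PlainTransport that emits plain RTP locally, describe those RTP ports in an SDP file, and let ffmpeg re-encode and push the result to RTMP. A minimal child-process sketch (the SDP filename and RTMP URL are placeholders):

```typescript
import { spawn } from 'child_process';

// 'stream.sdp' is assumed to describe the RTP ports the mediasoup
// PlainTransport is sending to; the RTMP URL is only a placeholder.
const RTMP_URL = 'rtmp://example.com/live/STREAM_KEY';

const ffmpeg = spawn('ffmpeg', [
  '-protocol_whitelist', 'file,udp,rtp',
  '-i', 'stream.sdp',     // plain RTP coming out of mediasoup
  '-c:v', 'libx264',      // RTMP/FLV expects H.264 video...
  '-c:a', 'aac',          // ...and AAC audio
  '-f', 'flv',
  RTMP_URL,
]);

ffmpeg.stderr.on('data', (chunk) => process.stdout.write(chunk)); // ffmpeg logs to stderr
ffmpeg.on('close', (code) => console.log(`ffmpeg exited with code ${code}`));
```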
I found two github projects with this kind of approach.
The first one is a bit outdated, using an old mediasoup version, but maybe you can find something interesting. Especially for the client/browser part, there is an HTML file that should be helpful. Be aware that the Mediasoup API may have changed, on both the front end and the back end.
EDIT: The first project does not use the Mediasoup client library; you can look at it here.
The second is more recent and really seems to match your need, though you may need some customization. But they don't provide any front-end part.
For mediasoup, you will find a lot of resources across the internet, GitHub, and YouTube for the client/server part.
If you want to look at it, there is the installation guide for Mediasoup v3 (the latest version). You have to install a specific Python version and set a few environment variables. After that you can install the npm package and happy coding!
It is easier to install on Linux, so if you are on Windows, preferably use WSL2 for testing. I don't know anything about Mac, but I know Docker is possible, so that should be fine too.
A much simpler option for streaming your webcam to other servers would be OBS Studio, but you have probably already considered it.
They have a plugin that lets you send your stream to multiple platforms at once; it looks really cool! Here
Hope this gives you some more options!
I am looking for a solution for video chat in Xamarin Forms with an Azure backend. Azure does not currently support WebRTC, so I plan to create two live streaming channels for the users: take one end's camera for one live streaming channel, and do the same for the other end user. Before I run this test, I want to know whether it will work, and whether the performance will be good or bad.
Or should I go with SignalR?
Unfortunately, I think neither Azure Media Services nor SignalR will give you the low latency you need for a live video chat application.
I think that your best bet when running on Azure will be to grab a Virtual Machine and install a 3rd-party product such as:
Kurento
jitsi
Wowza (which I think also offers their product as a SaaS)
Any other product you might find
Hope it helps!
I've previously built chat servers using NodeJS (i.e. central chat server with clients, no p2p), with Electron, or just good old Express. I'd like to re-use as much of my old code as possible. Thus, the only missing piece of the puzzle for me is what to use to enable both public and private video/audio streaming. File sending isn't necessary.
Is there anything out there I can 'easily' drop in to this model? I'm aware of Kurento and a few similar offerings but these feel like overkill for how I'm hoping to work.
Update: there have been a few suggestions about WebRTC, which I'm open to, but plans for this app include automated moderation/content filtering of any video broadcasts and text. So I assume such a solution would need to treat the server as a 'hardcoded' peer somehow, so that it's fairly safe to assume it will see a copy of anything sent over the public chat network. Of course, for private communications this need not be the case. On the flip side, worst case, operating in a spoke topology is fine too.
You can start with the WebRTC samples:
https://webrtc.github.io/samples/
WebRTC is kind of the standard now for audio/video calls. It all works p2p with no server interaction.
The only thing you need to build is a signaling protocol to connect the two users. For this you can use/extend your Node.js chat app.
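As a sketch of what that signaling could look like bolted onto an existing Node.js server (the message shape, room handling, and port are all assumptions on my part):

```typescript
import { WebSocketServer, WebSocket } from 'ws';

// Bare-bones signaling: relays SDP offers/answers and ICE candidates between
// peers that joined the same room. No auth or cleanup; just the core idea.
type SignalMessage = { room: string; payload: unknown };

const rooms = new Map<string, Set<WebSocket>>();
const wss = new WebSocketServer({ port: 9000 });

wss.on('connection', (socket) => {
  socket.on('message', (raw) => {
    const msg = JSON.parse(raw.toString()) as SignalMessage;
    const peers = rooms.get(msg.room) ?? new Set<WebSocket>();
    peers.add(socket);
    rooms.set(msg.room, peers);
    // Forward the offer/answer/candidate to the other peer(s) in the room;
    // the actual audio/video then flows p2p, never through this server.
    for (const peer of peers) {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) {
        peer.send(JSON.stringify(msg));
      }
    }
  });
});
```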
We have an iOS native app client making calls to a Bluemix speech-to-text service using WebSockets in direct interaction mode, which works great for us (very fast, very low latency). But we do need to retain a copy of the audio stream. Most audio clips are short (< 60 seconds). Is there an easy way to do that?
We can certainly have the client buffer the audio clip and upload it somewhere when convenient. This may increase the memory footprint, particularly for longer clips, and impact app performance if not done carefully.
Alternatively, we could switch to using the HTTP interface and relay via a proxy, which could then keep a copy for us. The concern here (other than rewriting an app that works perfectly fine for us) is that this may increase latency due to the extra hops in the main call path.
Any insights would be appreciated.
-rg
After some additional research we settled on using the Amazon S3 TransferUtility from the AWS Mobile SDK for iOS. It encapsulates data chunking and multi-threading within a single object, and even completes transfers in the background after iOS suspends the app.
http://docs.aws.amazon.com/mobile/sdkforios/developerguide/s3transferutility.html
The main advantages we see:
no impact on existing code: simply add a call to initiate a transfer
no need to implement and maintain a proxy server, which reduces complexity
Bluemix provides cloud object storage similar to S3, but we were unable to find an iOS SDK that supports anything other than a synchronous, single-threaded solution right out of the box (we were initially psyched to see 'Swift' support, but that proved to be just a coincidental use of the term).
My two cents....
I would switch to the HTTP interface. If you make things tougher for your users, they won't use your app and will figure out a better way to do things. You shouldn't have to rewrite the app, just the communications layer, and then have some sort of server-side application that will "cache" those audio streams.
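A bare-bones sketch of that caching relay, assuming Node.js/Express on the proxy, clips staying under a minute as you mentioned, and a placeholder speech-to-text URL (real Watson credentials and endpoint omitted):

```typescript
import express from 'express';
import fs from 'fs';

// Placeholder endpoint; substitute the real speech-to-text URL and auth.
const STT_URL = 'https://example.com/speech-to-text/api/v1/recognize';

const app = express();

app.post('/recognize', (req, res) => {
  const chunks: Buffer[] = [];
  req.on('data', (chunk: Buffer) => chunks.push(chunk));
  req.on('end', async () => {
    const audio = Buffer.concat(chunks);                 // clips are short, so buffering is fine
    fs.writeFileSync(`audio-${Date.now()}.wav`, audio);  // keep our copy of the audio
    const upstream = await fetch(STT_URL, {              // relay the same bytes to speech-to-text
      method: 'POST',
      headers: { 'Content-Type': String(req.headers['content-type'] ?? 'audio/wav') },
      body: audio,
    });
    res.status(upstream.status).send(await upstream.text()); // return the transcription to the app
  });
});

app.listen(3000);
```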
Another approach would be to leave your application as is and just add a step, on a different thread, that sends the audio file to some repository AFTER sending it to speech-to-text. In this case you could save not only the audio file but the text transcription as well.
I am planning a CCTV system project using Node.js, OpenCV, and WebGL.
Would you please take a look at my plan and point out flaws or give me advice?
My plan is: the entire system consists of 3 types of host: CCTV cameras, a server, and watchmen. The number of each host type might be (more than 10), 1, and 3, respectively. Each CCTV camera captures video and sends it to the server. The server identifies people in the video and analyzes who each person is and where he or she is (using OpenCV). Finally, each watchman can grasp the entire status of the field he or she manages (a map drawn with WebGL helps with this). I will use Node.js as the networking method.
I have a few issues about my plan.
Is it efficient to use Node.js as a video data transmitter?
The basic concept of Node.js is single-threaded, so maybe large data like video is not a good fit for it. But the number of CCTV cameras and watchmen is limited and fixed (it is a system for a closed intranet).
Is there any method that can replace Node.js?
I will not replace OpenCV and WebGL, but Node.js could change. At the beginning of planning, I was looking for other ways of networking between a C/C++ program and a web browser. Honestly, I failed a school project last year; one of the problems I couldn't solve was "How to send/receive data between a C program installed on a Raspberry Pi and a web browser". I chose Node.js as the method for this project, but I have also heard of other options such as Qt, a database, or CGI. Is there a better way?
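For reference, this is roughly the Node.js "glue" I have in mind between the OpenCV program and the browsers; the binary name, its output format, and the port are just assumptions for the sketch:

```typescript
import { spawn } from 'child_process';
import { WebSocketServer, WebSocket } from 'ws';

// Hypothetical OpenCV program that writes JPEG frames to stdout.
const camera = spawn('./cctv-opencv', ['--output', 'mjpeg']);

const wss = new WebSocketServer({ port: 8081 });

// Node.js never processes the video; it only relays bytes to the watchmen's
// browsers. Note: stdout chunks are not frame-aligned, so a real version
// would need length-prefixing or another framing scheme.
camera.stdout.on('data', (chunk: Buffer) => {
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(chunk, { binary: true });
    }
  }
});
```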
Thank you for reading it.