I am planning to write a Node.js server for streaming videos. One of my critical requirements is
to prevent video download (as much as possible), something similar to safaribooksonline.com.
I am planning to use Amazon S3 for storage and Node.js for streaming the videos to the client.
I want to know if Node.js is the right tool for streaming videos (max size 100 MB) for an application expecting a lot of users. If not, what are the alternatives?
Let me know if any additional details are required.
In very simple terms, you can't prevent video download. If a rogue client wants to do it, they generally can: the video has to reach the client for the client to be able to play it back.
What is most commonly done is to encrypt the video so the downloaded version is unplayable without the right decryption key. A DRM system will allow the client to play the video without being able to copy it (depending on how determined the user is; a high-quality camera pointed at a high-quality screen is hard to protect against, and in those scenarios other tracing technologies come into play).
As others have mentioned in the comments, streaming servers are not simple. They have to handle a wide range of encoders, packaging formats, streaming formats, etc. to reach as many clients as possible, and they have quite complicated mechanisms to ensure speed and reduce file storage requirements.
It might be an idea to look at some open source streaming servers to get a feel for the area, for example:
VideoLan (http://www.videolan.org/vlc/streaming.html)
GStreamer (https://gstreamer.freedesktop.org)
You can still use Node.js for the main web server component of your solution and just hand off the video streaming to the specialised streaming engines, if this meets your needs.
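A rough sketch of that split, assuming Express and a hypothetical streaming engine that serves HLS manifests (the host name, route, and auth check are all placeholders, not a specific product's API):

```js
// Node.js handles auth/session work, then hands the client off to a URL
// served by a dedicated streaming engine (Wowza, nginx + HLS, etc.).
const express = require("express");
const app = express();

// Hypothetical base URL of the streaming engine
const STREAMING_ORIGIN = "https://streams.example.com";

app.get("/watch/:videoId", (req, res) => {
  // Your own authorization logic would go here.
  if (!req.headers.authorization) {
    return res.status(401).send("Not allowed");
  }
  // Redirect the player to the manifest produced by the streaming server.
  res.redirect(`${STREAMING_ORIGIN}/hls/${req.params.videoId}/index.m3u8`);
});

app.listen(3000);
```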
Related
I am trying to create an on-demand audio streaming platform (similar to Spotify) from scratch. It will have 1000 users (I am optimizing for time to build, not scalability as of right now).
I want to use web-based technologies (I am experienced with React/Redux/Node). Could I get some advice on the architecture (what technologies I should use for the project)?
Here are things I need help with
What storage service should I use for my music files? (My song catalog is about 50,000 songs.)
How to stream music from the storage service to each user
What server protocol should I use (RTMP/WebRTC/RTSP)?
(Optional) How to store data in cache to reduce buffering
I know this is a huge ask so thank you guys for your help in advance
What storage service should I use for my music files? (My song catalog is about 50,000 songs.)
S3 (or equivalent).
Audio files fit this use case precisely, and you're already using AWS. If you find the cost too high, there are compatible services that are more affordable, all the way down to DIY on Minio.
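A minimal sketch of serving the catalog out of S3 with short-lived presigned URLs, assuming the AWS SDK for JavaScript v3; the bucket and key names are placeholders:

```js
// Generate a time-limited URL the player (or your CDN origin pull) can fetch.
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

const s3 = new S3Client({ region: "us-east-1" });

async function getTrackUrl(trackKey) {
  const command = new GetObjectCommand({
    Bucket: "my-music-catalog", // hypothetical bucket name
    Key: trackKey,              // e.g. "albums/123/track-01.mp3"
  });
  return getSignedUrl(s3, command, { expiresIn: 3600 }); // valid for 1 hour
}

getTrackUrl("albums/123/track-01.mp3").then(console.log);
```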
How to stream music from the storage service to each user
Use a CDN (or multiple CDNs) to optimize delivery and keep the latency low. CDNs are also better at spoon-feeding slow clients.
What server protocol should I use (RTMP/WebRTC/RTSP)?
Normal HTTP! That's all you need, and that's all that's been necessary for decades for this use case.
RTMP is a dead protocol, only supported by Flash on the client side. Its usage today is limited to sending source streams from video encoders, and even that is well on its way out the door.
WebRTC is intended for low latency connections, like voice calls and video chat. This is not something that matters in a unidirectional stream. You actually want a robust streaming mechanism... not one that drops audio like a cell phone to keep up with realtime.
RTSP is not something you can use in a browser, and is overly complex for what you need.
Just a simple HTTP service is sufficient. Your servers should support ranged requests so that the browser can lose a connection and still pick up right where it left off, without the listener even knowing. (All CDNs support this, as does any properly configured web server.)
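To make the range-request point concrete, here is a simplified sketch of a plain Node.js handler that honours the `Range` header for local files (in production your CDN or web server does this for you; paths and MIME type are placeholders, and suffix ranges are not handled):

```js
const http = require("http");
const fs = require("fs");
const path = require("path");

http.createServer((req, res) => {
  const filePath = path.join(__dirname, "media", path.basename(req.url));
  fs.stat(filePath, (err, stats) => {
    if (err) {
      res.writeHead(404);
      return res.end();
    }

    const range = req.headers.range; // e.g. "bytes=131072-"
    if (!range) {
      // No range: send the whole file.
      res.writeHead(200, {
        "Content-Type": "audio/mpeg",
        "Content-Length": stats.size,
        "Accept-Ranges": "bytes",
      });
      return fs.createReadStream(filePath).pipe(res);
    }

    const [startStr, endStr] = range.replace("bytes=", "").split("-");
    const start = parseInt(startStr, 10) || 0;
    const end = endStr ? parseInt(endStr, 10) : stats.size - 1;

    // 206 Partial Content lets the browser resume exactly where it stopped.
    res.writeHead(206, {
      "Content-Type": "audio/mpeg",
      "Content-Range": `bytes ${start}-${end}/${stats.size}`,
      "Content-Length": end - start + 1,
      "Accept-Ranges": "bytes",
    });
    fs.createReadStream(filePath, { start, end }).pipe(res);
  });
}).listen(8080);
```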
(Optional) How to store data in cache to reduce buffering
CDNs will generally improve performance of the initial connect and load. I also recommend pre-loading the next track to be played in the list so that you can start it immediately. In most browsers, you can actually start the next track at the tail end of the previous track for a smooth transition.
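A browser-side sketch of that pre-loading idea; the track URLs are placeholders and this assumes playback was started by a user gesture (autoplay policies apply):

```js
// Preload the next track and start it just before the current one ends
// for a smoother transition.
const current = new Audio("/tracks/track-01.mp3");
const next = new Audio("/tracks/track-02.mp3");
next.preload = "auto"; // hint the browser to begin buffering now

current.addEventListener("timeupdate", () => {
  // Roughly 0.2s before the end, kick off the next track.
  if (current.duration && current.duration - current.currentTime < 0.2 && next.paused) {
    next.play();
  }
});

current.play();
```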
I am working on a VoD (video on demand) project in Node.js which must let customers buy or subscribe to videos.
Videos are hosted on a streaming server (a server like Red5, but not exactly Red5) that provides an interactive player, adaptive bit-rate streaming, CDN-enhanced delivery, etc.
The problem I have is that users are able to download the videos, since they can easily obtain the video URLs.
According to the below question:
Is there a way a video file on a remote server can be downloaded in chunks using Node.js and piped through to a client, without storing any data on the server, …?
The request NPM package has been suggested.
Now my questions are:
Is the suggested solution a wise one to adopt for my scenario?
Following the suggested solution, would it still be possible to use the server's provided features like adaptive bit-rate streaming, etc.?
You may also encrypt each segment with AES to prevent copying.
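As a rough illustration of the idea, here is a sketch that encrypts one segment with AES-128-CBC using Node's crypto module, similar in spirit to HLS AES-128 segment encryption. The file names are placeholders; in a real setup you would normally let your packager handle HLS encryption (or proper DRM) and deliver the key to players over HTTPS.

```js
const crypto = require("crypto");
const fs = require("fs");

const key = crypto.randomBytes(16); // 128-bit key, served to authorized players only
const iv = crypto.randomBytes(16);  // per-segment IV, referenced from the playlist

function encryptSegment(inputPath, outputPath) {
  const cipher = crypto.createCipheriv("aes-128-cbc", key, iv);
  fs.createReadStream(inputPath)
    .pipe(cipher)
    .pipe(fs.createWriteStream(outputPath));
}

encryptSegment("segment0.ts", "segment0.enc.ts");
```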
We have an iOS native app client making calls to a Bluemix speech-to-text service using WebSockets in direct interaction mode, which works great for us (very fast, very low latency). But we do need to retain a copy of the audio stream. Most audio clips are short (< 60 seconds). Is there an easy way to do that?
We can certainly have the client buffer the audio clip and upload it somewhere when convenient. This may increase the memory footprint, particularly for longer clips, and could impact app performance if not done carefully.
Alternatively, we could switch to using the HTTP interface and relay via a proxy, which could then keep a copy for us. The concern here (other than re-writing an app that works perfectly fine for us) is that this may increase latency due to extra hops in the main call thread.
Any insights would be appreciated.
-rg
After some additional research we settled on using the Amazon S3 TransferUtility Mobile SDK for iOS. It encapsulates data chunking and multi-threading within a single object, and even completes transfers in the background after iOS suspends the app.
http://docs.aws.amazon.com/mobile/sdkforios/developerguide/s3transferutility.html
The main advantages we see:
no impact on existing code--simply add a call to initiate a transfer
no need to implement and maintain a proxy server, which reduces complexity
Bluemix provides cloud object storage similar to S3 but we were unable to find an iOS SDK that supports anything other than a synchronous, single-threaded solution right out of the box (we were initially psyched to see 'Swift' support, but that has proven to be just a coincidental use of terms).
My two cents....
I would switch to the HTTP interface. If you make things tougher for your users, they won't use your app and will figure out a better way to do things. You shouldn't have to rewrite the whole app, just the communications layer, and then have some sort of server-side application that "caches" those audio streams.
Another approach would be to leave your application as is, and just add a step, in a different thread, that sends the audio file to some repository after sending it to speech-to-text. In this case you could save not only the audio file, but the text translation as well.
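If you did go the proxy route, the relay can be quite small. A rough Node.js sketch, where the STT endpoint URL, auth header, and content type are all placeholders rather than the actual Bluemix API:

```js
// The app posts audio to this proxy, which writes a copy to disk while
// forwarding the same bytes to a speech-to-text HTTP endpoint.
const express = require("express");
const fs = require("fs");
const https = require("https");

const app = express();

app.post("/recognize", (req, res) => {
  const copy = fs.createWriteStream(`audio-${Date.now()}.wav`);
  req.pipe(copy); // keep our copy of the raw audio

  const upstream = https.request(
    "https://stt.example.com/v1/recognize", // hypothetical STT endpoint
    {
      method: "POST",
      headers: {
        "Content-Type": req.headers["content-type"] || "audio/wav",
        Authorization: "Bearer <token>", // placeholder credentials
      },
    },
    (sttRes) => {
      // Relay the transcription response back to the app unchanged.
      res.writeHead(sttRes.statusCode, sttRes.headers);
      sttRes.pipe(res);
    }
  );
  req.pipe(upstream); // forward the audio to the STT service at the same time
});

app.listen(3000);
```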
I'm a little curious about how live-streaming web applications work. I recently wanted to build something like an online radio that can live-stream to all clients: music, speech, etc. I'm quite familiar with Java Spring MVC and Node.js. If there are resources using the above technologies, it would be really helpful for me to see how it works. Thanks in advance.
There are two good articles about it:
Streaming Audio on the Web with NodeJS
Using NodeJS to Stream a Radio Broadcast
You may also find this module helpful:
https://www.npmjs.com/package/websockets-streaming-audio
The best way to do this is use Node.js as your source application, and leave the actual serving of streams to existing servers. No reason to re-invent streaming on the web if you can get all the flexibility you need by writing the source end.
The flow will look like this:
Your Radio Source App --> Icecast (or similar) --> Listeners
Inside your app itself:
Raw audio sources --> Codecs (MP3, AAC w/ADTS, etc.) --> Icecast Source Client
Basically, you'll need to create a raw PCM audio stream using whatever method you want for your use case. From there, you'll send that stream off to a handful of codecs, configured with different bitrates. What bitrate and quality you use is up to you, based on your users' bandwidth availability and the quality tradeoff you prefer. These days, I usually have 64k streams for bad mobile connections, and 256k streams for good connections. As long as you have at least a 128k stream in there, you'll be putting out acceptable quality.
The Icecast source client can be a simple HTTP PUT these days. The old method is very similar... instead of PUT, the verb was SOURCE. (There are some other minor differences as well, but that's the gist.)
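As a sketch of what that HTTP PUT source client might look like from Node.js (the host, mount point, and credentials are placeholders, and details vary with your Icecast version; the already-encoded audio is read from stdin here, e.g. piped out of an MP3/AAC encoder):

```js
const http = require("http");

const req = http.request({
  host: "icecast.example.com",
  port: 8000,
  method: "PUT",
  path: "/live.mp3", // mount point
  headers: {
    "Content-Type": "audio/mpeg",
    // Icecast source credentials, base64-encoded for HTTP Basic auth
    Authorization: "Basic " + Buffer.from("source:hackme").toString("base64"),
  },
});

// Keep pushing encoded audio for the life of the stream.
process.stdin.pipe(req);
```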
I'm going to publish a video on a web page for streaming. Within a month I expect more than 100,000 visits per day. I want to upload my video to a server (or service) that offers the same bandwidth to all clients, even if there are hundreds of thousands of clients connected simultaneously.
I will connect the player to the externally hosted video.
Note: I cannot use YouTube or Vimeo because the video uses 360° technology, so I need to use my custom player.
Please, could you suggest any service that offers this feature?
Thanks!!
I would say this is mostly a question of the streaming technology you'd like to use, not the storage alone.
E.g. if you wish to stream via some binary protocol like RTMP, you'll have to use software like Wowza for transcoding and delivery. Hence load balancing, for proper use of bandwidth, will also have to be handled in front of a server like Wowza.
So you should decide what protocols and other technologies you plan on using. This will narrow your search parameters.