I'm playing around with Flash Media Server, and got it streaming my webcam live through the browser.
I would like to be able to stream live on the web. However, I'm confused as to how the process works on the web. On my localhost, all I had to do was install the executable, use rtmp://localhost/live as the source, then create the SWF file and the HTML to broadcast it.
My questions are these:
I already have a Virtual Dedicated Server, and unless it can't be done, I don't want to purchase a Flash Media Server account with another FMS host. Do I have to install FMS on my server?
If yes, how do I install it on a Linux host?
Are there extra steps that I need to take beyond what made it work on my localhost?
Thanks.
The problem has been resolved: I simply purchased an account with a site offering Flash Media Server services, uploaded the files, and created the logic using ActionScript 3.0; everything worked fine at that point.
WebPageTest offers a hosted service, but they don't support the countries I need to test from.
I was going to set up WPT on a Linux VPS in the target country, but I can't find any installation instructions for the "client", only the server. There are some tutorials which use a local PC as the client, but this is not an option for us.
The question is: can a Linux server be both the server and the client (so no other software is needed) to test a page?
Found the answer here:
https://github.com/WPO-Foundation/wptagent/blob/master/docs/install.md
It DOES support Linux as an agent (as well as a server).
I am just wondering how Rabbit is able to give each user a different browser to use from a Linux machine. It seems like VNC tech, but I don't know; please let me know if you know how they are able to do that.
There is a somewhat detailed blog post about their architecture here: https://bloggeek.me/rabbit-webrtc-interview/
I will quote the relevant part for longevity:
We have two main stacks, one for audio/video and one for our business logic.

Our audio/video stack is built in Java on top of Netty. Our SFU allows us to use WebRTC with much larger groups than the normal use case. For our shared viewing feature (called Rabbitcast™), we had to build a native extension to capture and deliver an HD stream with audio from our virtual machines. Both of them use our own WebRTC server stack to talk to the clients.

Our Business Logic stack is built on top of Node.js, using a promise-based approach to keep our sanity.

Lastly, we use Redis both for intelligent caching and pub/sub. MongoDB is our persistent storage.
I am not sure exactly what they are using, but I have some ideas about how it works. As you already said, they are using virtual machines whose browsers are ported to an HTML5 VNC client for control and for streaming video and audio. Other options might be xpra, x2go, or Apache Guacamole to port them into an HTML5 client in the same way.
For some time now I've been trying to send files to an Embedded Linux device via FTP, without success. I even posted a question on SO about my problem before, and I still haven't got any further in solving it.
One thing I noticed, though, is that most FTP examples on the web involve a server-client relationship: the client connects to a server that is constantly listening on some IP and port, and the file transfer begins. But the examples using QNetworkAccessManager to send a file (generally over HTTP) never mention the "other side's" requirements, which leads me to believe I'm missing the necessary FTP server running on my Embedded Linux device.
So my question is more a confirmation of my suspicions: if I want to transfer a file from my desktop to my device using FTP, do I need an FTP server constantly running on that device? If so, how should that change my code? For instance, should I abandon QNetworkAccessManager in favour of QTcpSocket? In other words, what else should I know to make the file transfer work using Qt? (In fact, should I even bother with FTP at all instead of just using a normal QTcpServer?)
FTP is a protocol with two parties, the client and the server. Both must comply with the FTP specification before file transfer can take place.
So yes, there has to be an FTP daemon (the server) running on the other device.
It doesn't have to run constantly, just whenever you want to transfer files.
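To confirm the Qt side as well: once an FTP daemon is running on the device, QNetworkAccessManager can perform the upload on its own, so there is no need to drop down to raw sockets. Below is a minimal sketch, assuming Qt 5 (the FTP backend was removed from QNetworkAccessManager in Qt 6); the device address, credentials, and file paths are hypothetical:

// Minimal sketch, assuming Qt 5: QNetworkAccessManager still speaks FTP there.
// The address, credentials and paths are made up for illustration.
#include <QCoreApplication>
#include <QDebug>
#include <QFile>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QFile file("settings.cfg");              // local file to upload
    if (!file.open(QIODevice::ReadOnly)) {
        qWarning() << "cannot open local file";
        return 1;
    }

    QUrl url("ftp://192.168.1.50/upload/settings.cfg");  // hypothetical device
    url.setUserName("user");
    url.setPassword("password");

    QNetworkAccessManager manager;
    QNetworkReply *reply = manager.put(QNetworkRequest(url), &file);
    QObject::connect(reply, &QNetworkReply::finished, [&]() {
        if (reply->error() != QNetworkReply::NoError)
            qWarning() << "upload failed:" << reply->errorString();
        else
            qDebug() << "upload finished";
        app.quit();
    });

    return app.exec();                       // event loop drives the transfer
}

If you do decide FTP is overkill, the QTcpServer route works too, but then both ends of the protocol are yours to define and implement.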
I want to create a webpage that will access the USB port of the client. The intent is to configure the hardware connected to the USB port. I can't do a desktop application because the configuration options are different for different hardware connected, and I need to pull this code dynamically from the server. I am not a web programmer, so it would be great to find the best way to do this.
It turns out that I am attempting to write an app that does something similar. What I am doing instead is writing both the web server and the web page. Use something simple, like DLib for the web server, to serve the data to the end user.
This is how it works:
The web server handles the USB connection. If it is written in C++ or some other native language, you will have much more control over the device. The web page is then loaded from the web server that you have written. In the web page, you can have some sort of JavaScript worker, etc., constantly pull new data from the server and push data from the web interface to the USB device. This also adds a layer of protection, because you can ensure that the user has not made any modifications to the web page.
The main drawback to this approach is that you will be required to install the server on the client's machine. However, this can be circumvented by writing it as an applet that can be embedded within the page!
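To make the local-server idea concrete, here is a minimal sketch in C++ using plain POSIX sockets instead of a full library such as DLib. It serves a single plain-text status line read from a hypothetical USB device node; real code would need request parsing, error handling, and proper device I/O:

// Sketch only: a loopback HTTP server that fronts a (hypothetical) USB device.
// Compile on Linux with: g++ -std=c++17 usb_bridge.cpp -o usb_bridge
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <fstream>
#include <sstream>
#include <string>

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                    // page polls http://localhost:8080/
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  // local clients only
    bind(listener, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));
    listen(listener, 4);

    for (;;) {
        int client = accept(listener, nullptr, nullptr);
        if (client < 0)
            continue;

        char buf[1024];
        read(client, buf, sizeof(buf));             // ignore the request itself for brevity

        std::ifstream usb("/dev/ttyUSB0");          // hypothetical device node
        std::string status;
        std::getline(usb, status);

        std::ostringstream resp;
        resp << "HTTP/1.1 200 OK\r\n"
             << "Content-Type: text/plain\r\n"
             << "Content-Length: " << status.size() << "\r\n\r\n"
             << status;
        const std::string out = resp.str();
        write(client, out.data(), out.size());
        close(client);
    }
}

The page's JavaScript worker would then simply poll that endpoint for new data and POST configuration changes back to the same server.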
It is possible to write a browser plugin that communicates with USB devices. An example of an app that does that is MyTrezor.com, but unfortunately I don't think you can see the source of their plugin.
Another option might be to use the chrome.usb or chrome.serial JavaScript API, but this means your app would only work in Google Chrome, and it would have to be installed as a Chrome packaged app, a special format that looks more like a native app than a web page.
For a long time now I have been using a local XAMPP installation on my OS X machine for all my web development. Because updating/maintaining XAMPP is such a pain, I set up an Ubuntu server for my web development.
I would like to know what you think is the best/easiest way to connect to your main development server to edit the files. What protocol do you use (smb, webdav, ftp, ldap, etc.)? Also, do you leave the files on your machine and let the server read them from your hard drive (e.g. via an smb share), or do you keep the files on the server?
I would go with SMB as your means of file transfer. How you do this is up to you. It depends on how often your files are accessed, how often they are updated, etc. If you plan on updating the files often (i.e. if you are in a rapid dev phase), then you can link them like you talked about. If updating is infrequent and the number of requests is high, upload them to the server. This will decrease the amount of stress on your LAN as the files are requested; in the other method the route would have been modem -- SMB server -- SMB share -- SMB server -- modem, whereas this way it is modem -- SMB server -- modem.
I use an Ubuntu virtual machine running the web server, git and vim. That way I back up everything: my Vim configuration and the server config. For me it is the fastest way to recover from a crash, for example.
Also, you can use vim through ssh with
vim scp://myuser@server.com//home/myuser/file
A simpler example is to view a page's source with editor syntax highlighting and indentation:
vim http://domain.com
You can save your ssh credentials too (e.g. with ssh keys), so you are not prompted for a password each time.
I normally use Aptana (an Eclipse derivative) over ssh/sftp to edit the files directly on my server.
If you need to transfer files, I suggest using something like FileZilla, which will let you connect over ftp or ssh/sftp.
I used to map a SMB share of my LAMP server and edit the PHP files directly with Dreamweaver. Worked really well.