What's better for video and real-time control - ZMQ or WebSockets?

I am writing a remote server to control a robot with. The robot provides video and its current sensor state; the server sends control commands.
My choices for sending the frames and the control/sensor-state between the robot and the server (two-way) are ZeroMQ and WebSockets.
I need:
Speed
Security
My coding partner wants to use WebSockets because the protocol is undergoing standardization, but I have 3 months of experience using ZMQ to do just what we're trying to do, so I'm fairly certain the choice doesn't matter.
However, I'd like to know if anyone can think of a compelling reason to go with one OR the other (XOR). We're not going to use ZMQ+Websockets because we don't need to.

Looking at what WebSockets are, I honestly don't think it's going to make much difference. They're simply a way to switch from speaking HTTP to the WebSocket framed-message protocol over the same TCP connection. ZMQ gives you framed messages too, but you'd be using it over a network connection separate from the web browser's HTTP connection.
Latency comparisons are going to depend on just how good a runtime environment the web browser provides. It seems to me that using WebSockets will involve writing the client-side code in JavaScript and running it in the browser (the "modern" way), so that code will be at the mercy of the web browser's JavaScript engine (they're pretty good, I think).
With ZMQ you may have to write a native application for the client end (I don't know whether it can be used from JavaScript inside a web browser - I need some education!). A native application is free of any influence from a web browser, so it might be just a shade better.
But if your real-time requirement is only on the human scale (i.e. it need only respond quickly enough to keep a human happy), I think either will likely be sufficient. Neither can overcome propagation times across the Internet, and neither can account for OS/browser delays.
The one difference is that with WebSockets it looks like you have to switch between HTTP and the WebSocket protocol. So if you need to switch back to HTTP to load some web element or other, that's going to interrupt the flow of WebSocket data until you switch back again. Whether that is actually a problem depends very much on what your client-side application is doing (for example, you may well be talking to a separate web server for page elements, in which case you'd have two connections on the go anyway).
With ZMQ you're going to have a dedicated connection.
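To make the WebSocket side concrete, here is a minimal sketch (not a recommendation either way) of one connection carrying both binary video frames and JSON control messages, assuming Node with the 'ws' npm package; grabFrame() and applyControl() are hypothetical stand-ins for the robot's camera and motor interfaces. For the security requirement you would put TLS in front of it and connect over wss://.

```typescript
// Hedged sketch: a single WebSocket connection carrying binary video frames and
// JSON control messages. Assumes the 'ws' npm package; grabFrame()/applyControl()
// are hypothetical placeholders for the robot's camera and motor interfaces.
import { WebSocketServer, WebSocket } from "ws";

declare function grabFrame(): Buffer;               // placeholder: capture one video frame
declare function applyControl(cmd: unknown): void;  // placeholder: drive the robot

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (ws: WebSocket) => {
  // Push a frame roughly 30 times a second; the protocol keeps each send framed.
  const timer = setInterval(() => {
    if (ws.readyState === WebSocket.OPEN) ws.send(grabFrame(), { binary: true });
  }, 33);

  // Control commands arrive as JSON text messages on the same socket.
  ws.on("message", (data, isBinary) => {
    if (!isBinary) applyControl(JSON.parse(data.toString()));
  });

  ws.on("close", () => clearInterval(timer));
});
```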

Related

Tradeoffs of browser-based development vs. Smart Client

I've got an app that's been started on the Microsoft stack as a smart client (notionally WCF/WS enabled), with a small client app that gets deployed and the rest of the app running in our private cloud. Its only real dependencies are internet connectivity, .NET 4 and a Windows operating system.
I am under pressure to convert to a browser-based architecture for all future development. Based on other web apps I've worked on, I'm concerned that, given how tightly client IT organizations can control the browser, it will cause more problems down the line than I really want to deal with.
Do you have experience making this kind of decision? What technical factors did you consider when deciding to go smart-client vs. browser? What resources were helpful in making this decision?
My app is a healthcare app targeted at healthcare providers (e.g. hospitals), so everywhere I go I have to worry about the healthcare CIO looking over my shoulder.
Interesting. I started out as a C# WinForms and WPF desktop programmer, and was later assigned to do web development. I haven't touched Smart Client yet, but I think it should be much the same as a native app. Based on my experience, the technical things to consider are:
Multi-browser support
Without a library, plugin or framework for your components, it will be insanely hard to keep your app working across browsers, especially for reporting and graphics processing. Most of the pain is in CSS styling, less so in JavaScript.
Client-side programming (JavaScript)
You lose the ability to build controls and animation with C# controls; instead you must use JavaScript (jQuery or another library). JavaScript is not fully OOP and is interpreted (no compile errors), which makes it harder to work with (there may be frameworks such as CoffeeScript that help, but I haven't explored them yet). In addition, things are harder to build because server request/response activity is needed in the middle of the process, which I describe below.
Request / Response Client-Server Architecture
This means that most client-side operations will need to make a request to the server (to fetch data to display, to modify data, etc.). It also means you lose control events, and even if you use ASP.NET WebForms it still needs some tweaks for events to work. However, since I assume you have already used WCF, this kind of architecture shouldn't be that hard for you.
Security
Don't keep sensitive information such as passwords in the client (hidden fields, JavaScript variables, etc.). The principle is the same as in your smart client, but in the browser the user has free access to debug your web page.
Concurrent and Multithreading
In the browser it is easy for users to open multiple tabs, so concurrent operations are very likely to occur. Your code must be able to handle this on the client side. On the server side you can still use WCF to handle concurrency.
My 2 cents.
Obviously the web application has its own challenges. I hope this link can help you in some aspects: http://msdn.microsoft.com/en-us/library/ee658099.aspx
Along with those, you need to focus on non-functional requirements like extensibility and scalability too.

Using Node.js as an access point for mobile application API

I'm not sure if I've quite grokked node.js yet, but I really want to implement it, cause what I do understand is pretty friggen sweet.
I've got a mobile application that uses an API from a third party. Users typically open it up to see if anything is new. It occurred to me that so long as I respect the third party API's polling limits (and other restrictions) I could simulate a push based system and allow the user to be notified once something is new.
Basically implement all the API polling from a Node.js server on some sort of interval, and make the mobile app point to my Node.js server instead of the end point API.
I figure that this will be good for a number of reasons:
Takes load off the phone's data usage (since I can cache things on both the phone and the server). This is a huge win for users who have a pay-per-byte data plan
Allows a central location for storing / accessing all the data
Lets me do some optimization on the server side (if two users happen to subscribe to the same feed I can get that in one request).
I figure this could be bad for a number of reasons:
If my server goes down, then all my apps die. By acting as a go-between, my Node.js implementation may very well introduce a higher number of points of failure.
When the third party releases additions to the API, it requires me to implement the changes in two places, instead of one.
My question is this: In general, is this good practice? If not, why?
Your proxy idea is fine, in that it:
converts poll to push
insulates the client from API changes
allows for some optimizations
I feel only #1 is really important.
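For what it's worth, here is a minimal sketch of the polling-proxy idea from the question, assuming Node 18+ (for the global fetch) with Express; THIRD_PARTY_URL, the /feed path and the 60-second interval are placeholders, and real push notifications to the phone would sit on top of this.

```typescript
// Hedged sketch of the polling proxy: poll the third-party API on an interval
// (respecting its limits), keep the latest result in memory, and serve every
// client from that cache. THIRD_PARTY_URL and the interval are placeholders.
import express from "express";

const THIRD_PARTY_URL = "https://api.example.com/feed"; // hypothetical upstream endpoint
const POLL_INTERVAL_MS = 60_000;                         // stay under the API's polling limit

let cache: { fetchedAt: number; data: unknown } | null = null;

async function poll(): Promise<void> {
  try {
    const res = await fetch(THIRD_PARTY_URL);            // Node 18+ global fetch
    cache = { fetchedAt: Date.now(), data: await res.json() };
  } catch (err) {
    console.error("poll failed, keeping stale cache", err);
  }
}

setInterval(poll, POLL_INTERVAL_MS);
void poll(); // prime the cache on startup

const app = express();
app.get("/feed", (_req, res) => {
  if (!cache) {
    res.status(503).json({ error: "cache not ready" });
    return;
  }
  res.json(cache); // one upstream request serves every subscribed client
});
app.listen(3000);
```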

Browser-based front-end to database-like server app

I am looking to develop a browser-based front-end/client to what is essentially a database-like back-end/server.
The server application will need to access some local hardware I/O and will be logging events to a database (or even a fixed format text file).
The front-end needs to display real-time status of the remote I/O, as well as be able to browse the event log by date. This means that the server will likely need to be able to push to the client as events happen or status changes.
My background is in embedded/firmware, assembly, C/C++, and I have done a fair bit of work with Windows/MFC clients providing UI to devices via TCP/IP, UDP, and serial connections, but I don't have any web based experience.
The number of choices for web development these days is overwhelming, so I am really looking for somebody with experience to point me in the right direction on which technologies/platforms to consider researching (e.g. AJAX, ASP.NET, Node.js, JavaScript, PHP...).
I suspect providing the information to the front-end will be the easier part, and that the back-end will require two parts (one app/service to interface with the hardware and create a database/file that the other part can access and serve to the client).
What tools/platforms/technologies would you recommend to tackle this, and why?
Any advice is appreciated. (Links to references/tutorials also appreciated).
Thanks!
I would recommend looking at the Ext JS framework. It is a server-agnostic, client-side framework that can greatly speed up development. Being a client framework it is based on JavaScript and can talk via AJAX with JSON/XML to server-side systems. It offers a very rich and professional experience and is worth the $595 price tag.
You build most of your application client-side and it can work with almost any back-end. The engine is fast enough to display real-time data and has a strong developer community.
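For the "push status to the client as events happen" requirement from the question, here is a hedged sketch of the server half, independent of whichever UI framework (Ext JS or otherwise) does the rendering. It assumes Node with the 'ws' npm package, and readIo() is a hypothetical stand-in for the hardware-facing process; the browser side would just open a WebSocket and update the page in its message handler.

```typescript
// Hedged sketch of the push half: poll the local hardware interface and broadcast
// status changes to every connected browser. Assumes the 'ws' npm package; readIo()
// is a hypothetical stand-in for the process that talks to the I/O hardware.
import { WebSocketServer, WebSocket } from "ws";

declare function readIo(): { channel: string; value: number }[]; // placeholder for local I/O

const wss = new WebSocketServer({ port: 8081 });

setInterval(() => {
  const status = JSON.stringify({ type: "status", at: Date.now(), io: readIo() });
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(status);
  }
}, 500);
```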

How to realize communication between browser and backend?

I have backend software that needs to be able to communicate with a Gecko-based web browser (and vice versa). What is the best way to achieve this? Since HTTP is rather one-way (with the exception of e.g. reverse AJAX, which I consider quite "hacky"), I am wondering how to do this.
Would creating an NPAPI-based plugin be an option? Based on the data exchanged between the browser and backend, the browser needs to manipulate the DOM of a webpage. The manipulations need to be quite dynamic and communication speed is an important requirement.
I am glad for any help pointing me in the right direction or providing useful resources that might be worth reading!
Writing browser plugins isn't exactly trivial, so if you can, use alternatives like WebSockets (or emulations of them like web-socket-js; see here and here for more details).
Only if such alternatives don't give you enough control because of special requirements should you consider writing a browser plugin.
With it you would get the full benefits of native code (high control over whatever API you choose) but also the problems that come with it:
you have to start to worry about privileges
bugs can crash the whole browser
you might have to handle behavioral differences between platforms and browsers
you have to worry about distribution on multiple platforms
...
If you need the higher level of control for some reason you could
implement the connection handling of your choice in the plugin
let the JavaScript initiate connections and send data
let the JavaScript register handlers for incoming data etc.
on incoming data call those handlers and pass them the data
To get started with NPAPI plugins see here; to support IE too you'd have to write a content extension. Finally, I would advise taking a look at FireBreath, which already does much of the heavy lifting for you (it hides the different APIs for IE and NPAPI, gives you a higher-level API, includes fixes for browser bugs, ...).
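As a rough sketch of the WebSocket alternative mentioned above: the page opens a socket to the backend, registers a handler for incoming data, and manipulates the DOM from it, which is the same shape as the plugin steps listed earlier minus the native code. The backend address and the { selector, html } message format are assumptions made purely for illustration.

```typescript
// Hedged sketch of the in-page (WebSocket) alternative: register a handler for
// incoming data and manipulate the DOM from it. The address and the
// { selector, html } message format are assumptions for illustration only.
const socket = new WebSocket("ws://localhost:9000/updates");

socket.addEventListener("message", (event: MessageEvent<string>) => {
  const update = JSON.parse(event.data) as { selector: string; html: string };
  const target = document.querySelector(update.selector);
  if (target) target.innerHTML = update.html; // backend-driven DOM manipulation
});

socket.addEventListener("open", () => {
  socket.send(JSON.stringify({ type: "hello" })); // the page can initiate and send data too
});
```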

Real time browser game server

I'm mostly looking for setup advice and pointers on how to go about this. I'll explain in as much detail as I can, and also note possible approaches that seem plausible.
The aim is to create a real-time browser game. The best method I have found for my needs would be to use "long polling" with AJAX, which basically sets up a request with the server that "hangs there" until the server has something to send, then re-establishes the connection upon receipt for more data. For my purposes this will handle a chat system as well as character movement, i.e. if a player enters the same area, the clients there will receive a response informing them, and the browser client updates to show this.
The above is relatively easy to implement and I have already made a test case for it; however, I want to improve on it. On the server side it runs a loop for X amount of time before it times out and sends back an empty string, so another connection can be made; this is to prevent infinite loops using up resources where they shouldn't. Instead of querying the database on each loop cycle for messages that need sending to the client (which I believe would be expensive), I use flat files: if a file has a modified timestamp greater than that of the last message sent to the client, then there is something new to send. However, I believe this is also expensive (though not as much as using a MySQL database?) when done a couple of times per second.
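For reference, the long-poll shape described above looks roughly like this when sketched in Node/TypeScript with Express rather than PHP; MESSAGE_FILE, the 25-second timeout and the 250 ms re-check interval are assumptions.

```typescript
// Hedged sketch of the long-poll handler: hold the request open, re-check the
// flat file's mtime a few times a second, and reply with an empty list on timeout
// so the client can reconnect. File path, timeout and check interval are assumptions.
import express from "express";
import { statSync, readFileSync } from "node:fs";

const MESSAGE_FILE = "/tmp/messages.json"; // hypothetical flat file the game writes to
const POLL_TIMEOUT_MS = 25_000;
const CHECK_EVERY_MS = 250;

const app = express();

app.get("/poll", (req, res) => {
  const since = Number(req.query.since ?? 0); // mtime the client last saw
  const deadline = Date.now() + POLL_TIMEOUT_MS;

  const check = (): void => {
    const mtime = statSync(MESSAGE_FILE).mtimeMs;
    if (mtime > since) {
      res.json({ mtime, messages: JSON.parse(readFileSync(MESSAGE_FILE, "utf8")) });
      return;
    }
    if (Date.now() >= deadline) {
      res.json({ mtime: since, messages: [] }); // empty reply; client reconnects
      return;
    }
    setTimeout(check, CHECK_EVERY_MS);
  };
  check();
});

app.listen(3000);
```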
My thought process was to have a C++ program (for speed) constantly running, and use it for very fast in-memory lookups of new messages and so forth. This would also give me the added bonus of being able to have bots within the game that the server can control, for a more real-time feel/approach. However, I have no clue if this is even possible, and my searches on Google have been fruitless.
The approach I would most love to take is to continue using PHP for the rendering and control of the page etc., and have the AJAX requests go to the C++ application (which will always be running) to handle all the real-time aspects.
CGI defeats the purpose of the above approach, as it creates a new instance of the application on each request, which is both slow and exactly what I do not want. I have PHP for that and don't want to swap one perfectly working language for another that would supposedly be better suited; PHP, however (to my knowledge), can't keep things in memory (RAM) between requests and so forth.
Another approach I have thought about is to use PHP sockets to connect to the C++ application, though I have no idea how feasible this may be. The C++ application basically only needs to control bots (AI) and the chat system messages. I have absolutely no idea how to go about handling bots via PHP.
I hope this fully explains my intentions and goals, so if anyone has any pointers or advice then please reply and help me out; it would be very much appreciated. If you need any extra information (if I didn't cover something, or didn't cover it very well), I'll be happy to try to explain it better.
How fast do the reactions need to be? For anything approaching real-time action games, AJAX/Comet is going to be much too slow. The overhead is also really depressing.
The way forward for that kind of thing will probably be WebSocket, with a custom server on the backend. But I don't think that means you need to resort to C[++] for this; the bottleneck is most likely going to be the network and not server processor power.
I'm using a Python SocketServer with a trivial message replication system — all the game logic in my case is on the client-side, with some complicated JavaScript maintaining a consistent game world in the face of lag — but even for a more complex server-side I think a scripting language will probably be just fine.
WebSocket isn't ready yet; there are no mainstream browser implementations. In the meantime I'm using a Flash Socket backup that emulates the WebSocket interface. Flash Sockets have their own problems in that they fail to negotiate proxies, but they are fast and hopefully the need for them will diminish as WebSocket arrives properly.
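To illustrate the kind of "custom server with trivial message replication" described above, here is a hedged sketch using Node with the 'ws' npm package rather than the Python SocketServer mentioned earlier: whatever one client sends is relayed to every other connected client, and all the game logic stays client-side.

```typescript
// Hedged sketch of a trivial message-replication server using the 'ws' npm package:
// whatever one client sends is relayed to every other connected client, and all the
// game logic stays on the client side.
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8090 });

wss.on("connection", (ws: WebSocket) => {
  ws.on("message", (data) => {
    for (const client of wss.clients) {
      if (client !== ws && client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
  });
});
```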
Reading your post sets alarm bells ringing.
How familiar are you with multi-threaded code? With C++? If the answer is "not very", then I fear you might be biting off quite a large chunk. Why not take advantage of some existing (tried and tested) Comet server implementations rather than this bare-bones approach? Whatever application you have in mind, it should be kept quite separate from the comms implementation.
As someone who has implemented such a server, I can tell you that it will take many design iterations and a helluva long time to get right. Testing such a product realistically is also a very tricky process.
