Where does the browser fail as a client?

Where should the browser be improved to deliver better application experiences?
For instance, some of my main gripes are:
A) Different browsers need different configurations/plugins (I don't want to download different JREs or RIA platforms such as Flash, Silverlight, Gears, and so forth).
B) I want to always be able to drag data from my desktop to a web app. I don't like clicking Browse, picking a file, and then uploading it. This is something that should be handled easily (see the sketch after this list).
Additionally, based on the above point, I'd like it to be very easy to drag information from a web page to my computer, to be used in whatever shape or form is needed. For instance, I'd like to be able to drag my user ID from Stack Overflow into my mail/CRM client, which would take the relevant information and maybe even build up a picture of my knowledge.
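(For the desktop-to-web-app direction, HTML5's drag-and-drop and File APIs can already get most of the way there; below is a minimal sketch in TypeScript, assuming a page element with id "dropzone" and a hypothetical /upload endpoint.)

```typescript
// Minimal HTML5 drag-and-drop file handling; the element id and the
// /upload endpoint are assumptions for illustration.
const dropzone = document.querySelector<HTMLElement>('#dropzone');

if (dropzone) {
  // Prevent the browser's default behaviour (navigating to the dropped file).
  dropzone.addEventListener('dragover', e => e.preventDefault());

  dropzone.addEventListener('drop', e => {
    e.preventDefault();
    const files = e.dataTransfer?.files;
    if (!files || files.length === 0) return;

    // Upload every dropped file without a "Browse..." dialog.
    const form = new FormData();
    for (const file of Array.from(files)) {
      form.append('files', file, file.name);
    }
    fetch('/upload', { method: 'POST', body: form }).catch(console.error);
  });
}
```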
What else am I missing?

I see the current problem as an ever-growing pile of technologies. Just connect straight to the user's browser with API calls via RPC. RPC is the way forward, and it would put an end to this technology pile-up. Currently, security is too weak for this kind of approach; maybe some sort of virtual sandbox would fix that.
See the RPyc theory of operation and screencast; it explains the idea pretty well.

Hybrid App Development, Database-Driven Content

I've been doing a lot of research, and perhaps just need a few dots connected.
I have an idea for a mobile app/website that contains lists of local eating/drinking establishments along with the deals/specials they offer each day. The idea is to create an app that people can refer to in order to save money on a night out.
I'm familiar enough with HTML/CSS/JS to create a functioning website, but when it comes to the backend I'm a little confused. Editing the markup to reflect changes (e.g. a new deal starts or a new establishment opens up) is a bit cumbersome. Now I know I want a database with my information, ready to be displayed on my page. Does this mean that I need to develop my own API for everything, and then make sure it integrates with the hosting service that I end up choosing?
I feel like I'm missing something that should make it obvious what the next step is. Can anyone offer any advice?
The short answer is yes, you are exactly right.
The long answer is that this is definitely one way to do it. But for large projects, using plain JS can get quite cumbersome on the client end. Usually the first level is something like AJAX; it's a great way to start and you can go a long way with just AJAX. This is actually where most people "start" when using plain JavaScript to make API calls. The next level is a framework like Angular, which of course does more for you than just handle API calls, but requires a larger investment in learning.
So that is all client side...
Now for the server-side part... When you publish a website, you are dealing with "server-side" content. Your static content is served up from the server, but it's always the same static content; it only becomes dynamic on the client once all the JavaScript is parsed and run.
The API is another server-side component. But instead of being static like your pages (a bunch of files just sitting there), it is an actual application running on the server. It takes a command via an API request, does its thinking, and then dynamically returns a response object to the requester, which in this case will be the JS on your site.
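To make that concrete, here is a minimal client-side sketch in TypeScript; the /api/deals endpoint and the shape of the response are made up for illustration, and your real API would define its own.

```typescript
// Hypothetical response shape for a "deals" API.
interface Deal {
  establishment: string;
  description: string;
  day: string;
}

async function loadDeals(day: string): Promise<void> {
  // Ask the server-side API for today's deals instead of hand-editing markup.
  const response = await fetch(`/api/deals?day=${encodeURIComponent(day)}`);
  if (!response.ok) {
    throw new Error(`API request failed: ${response.status}`);
  }
  const deals: Deal[] = await response.json();

  // Render the results into the page (a real app should escape this HTML).
  const list = document.querySelector('#deals');
  if (!list) return;
  list.innerHTML = deals
    .map(d => `<li>${d.establishment}: ${d.description}</li>`)
    .join('');
}

loadDeals('friday').catch(console.error);
```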
Now, if you don't like the idea of learning to make your own API, there are services out there that will host an API for you and give you a GUI to build it. I can't recommend one because I have never used one, but I do work with businesses that do, and they love the fact that they don't have to hire a dev to make their APIs. The downside is that they are tied to that service and limited to the functionality it offers. It's not a big limitation, as these services are quite powerful, but if you are going to be managing complex data sets it would probably be better to learn to make your own API.
Hope that clears things up a bit for you!

Reactive, long-running sequences and persistence in the cloud

I am about to build a kind of website tracking system.
Think of a website where users click on various links; every page view is tracked with a unique user ID and an identifier of the page.
Now, a single user might view 20 pages, some relevant, some not. What I want to track is whether a user follows a specific “path”. Example: “Home Page” -> “Product A Page” -> “Get more info page” -> “Buy” -> “Paid”. There might be other page views in between each of these steps; the important thing is IF a user follows a given pattern.
In addition, I need to measure time between each step (each page view has a timestamp).
I have been playing around with Reactive Extensions, but I am not an expert in the area so I would like to hear if this would be a job for the Reactive Framework or if other technologies are more suitable?
I imagine a server getting a stream of website page views and then some fancy reactive LINQ queries that captures the events (this is where I need some help).
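To make the pattern concrete, here is roughly the kind of query I have in mind, sketched with RxJS in TypeScript rather than .NET Rx (the operator ideas, groupBy/scan/filter, should carry over); the PageView shape and the step names are just placeholders.

```typescript
import { Subject } from 'rxjs';
import { filter, groupBy, map, mergeMap, scan, take } from 'rxjs/operators';

interface PageView {
  userId: string;
  page: string;
  timestamp: number; // epoch milliseconds
}

interface FunnelState {
  nextStep: number;     // index into `funnel` of the step we are waiting for
  stepTimes: number[];  // timestamps of the steps matched so far
}

// The path to detect; unrelated page views in between are simply ignored.
const funnel = ['Home Page', 'Product A Page', 'Get more info page', 'Buy', 'Paid'];
const initialState: FunnelState = { nextStep: 0, stepTimes: [] };

const pageViews$ = new Subject<PageView>();

const completedFunnels$ = pageViews$.pipe(
  groupBy(view => view.userId),        // one inner stream per user
  mergeMap(user$ =>
    user$.pipe(
      scan((state: FunnelState, view: PageView) => {
        // Only advance when the page is the step we are currently waiting for.
        if (view.page === funnel[state.nextStep]) {
          return {
            nextStep: state.nextStep + 1,
            stepTimes: [...state.stepTimes, view.timestamp],
          };
        }
        return state;
      }, initialState),
      filter(state => state.nextStep === funnel.length), // whole path followed
      take(1),                                           // report once per user
      map(state => ({
        userId: user$.key,
        // time between consecutive steps, in milliseconds
        stepDurations: state.stepTimes.slice(1).map((t, i) => t - state.stepTimes[i]),
      })),
    ),
  ),
);

completedFunnels$.subscribe(result => console.log(result));
```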
Next question that comes to my mind is how to host this behind a load balancer (on Windows Azure)? If you run two instances and the “Home Page” page view goes to instance 1 and “Product A Page” goes to instance 2, how do they communicate about this, or should some kind of sharding (e.g. per user ID) be enforced?
Lastly, what about persistence? How should the data be stored? Should you store it in an event-queue pattern and then load everything into memory when you “play back” after a restart of the server?
I know that was a lot of questions, but I do love the philosophy behind Reactive Extensions; I just cannot get my head around how to “put it into production in the cloud” :)
Thanks!
Casper
There are lots of solutions out there in this space already that you can integrate into your platform. Are you sure you're not reinventing the wheel? Google Analytics has functionality similar to this. If you need to go your own way, then SQL Server StreamInsight might be a better fit.
For behind-the-firewall solutions, also look at http://piwik.org/ (free, open source) and http://www.haveamint.com/.

Allowing users to point their domains to a web-based application?

I'm possibly developing a web-based application that allows users to create individual pages. I would like users to be able to use their own domains/sub-domains to access the pages.
So far I've considered:
A) Getting users to forward with masking to their pages. Probably the most inefficient option; having used this before myself, I'm pretty sure it iframes the page (though I'm not entirely sure).
B) Having the users download certain files, which then make calls to the server for their specific account settings via a user key of some sort. The most efficient option in my mind at the moment; however, it requires letting users see a fair amount of source code, something I'd rather not do if possible.
C) Getting the users to add a CNAME record to their DNS settings, which is semi-inefficient (most of these users will be used to uploading files via FTP, which is why B seems the most efficient option), but at the same time means they won't see any source code.
The downside is, I have no idea how to implement C or what would be needed.
I got the idea from: http://unbounce.com/features/custom-urls/.
I'm wondering which of the three methods I should use to allow custom URLs for users. I would prefer C, but I have no idea how to implement it (I'm kind of asking how), and whether the time spent learning and setting up that kind of functionality would even be worth it.
Any answers/opinions/comments would be very much appreciated :)!
Option C is called wildcard DNS: I've linked to a writeup that gives an example of how to do it using Apache. Other web server setups should be able to do this as well; for what you want, it is well worth it.
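Once DNS points a domain at your server, the application side mostly comes down to reading the Host header on each request and mapping it to an account. A minimal sketch in TypeScript with Node/Express; lookupAccountByHost and renderPage are hypothetical stand-ins for your own persistence and templating.

```typescript
import express from 'express';

// Hypothetical stand-ins for real persistence and templating.
async function lookupAccountByHost(host: string): Promise<{ name: string } | null> {
  // e.g. look up accounts by custom domain or by wildcard subdomain
  return host.endsWith('.yourapp.example') ? { name: host.split('.')[0] } : null;
}

function renderPage(account: { name: string }): string {
  return `<h1>${account.name}'s page</h1>`;
}

const app = express();

app.use(async (req, res) => {
  // "deals.customerdomain.com" (via a CNAME) or "alice.yourapp.example"
  // (via wildcard DNS) both arrive here; strip any port before the lookup.
  const host = (req.headers.host ?? '').split(':')[0].toLowerCase();

  const account = await lookupAccountByHost(host);
  if (!account) {
    res.status(404).send('Unknown domain');
    return;
  }
  res.send(renderPage(account));
});

app.listen(8080);
```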

What user-information is available to code running in browsers?

I recently had an argument with someone regarding the ability of a website to take screenshots on the user's machine. He argued that using a GUI-program to simulate clicking a mouse really fast to win a simple flash game could theoretically be detected (if the site cared enough) by logging abnormally high scores and taking a screenshot of those players' desktops for moderator review. I argued that since all website code runs within the browser, it cannot step outside the system to take such a screenshot.
This segued into a more general discussion of the capabilities of websites, through Javascript, Flash, or whatever other method (acceptable or nefarious), to make that step outside of the system. We agreed that at minimum some things were grabbable: the OS, the size of the user's full desktop. But we definitely couldn't agree on how sandboxed in-browser code was. All in all he gave website code way more credit than I did.
So, who's right? Can websites take desktop screenshots? Can they enumerate all your open windows? What else can (or can't) they do? Clearly any such code would have to be OS-specific, but imagine an ambitious site willing to write the code to target multiple OSes and systems.
Googling this led me to many red herrings with relatively little good information, so I decided to ask here.
Generally speaking, the security model of browsers is supposed to keep JavaScript code completely contained within its sandbox. Anything about the local machine that isn't reflected in the properties of the window object and its children is inaccessible.
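To make that boundary concrete, here is a minimal sketch in TypeScript of roughly the kind of information page script can read; nothing here takes a desktop screenshot or lists other open windows, and the standard DOM offers no call that silently could.

```typescript
// Information plain in-browser JavaScript can read about the machine is
// limited to what window, navigator and screen expose.
const visible = {
  userAgent: navigator.userAgent,   // browser + OS string
  platform: navigator.platform,     // e.g. "Win32", "MacIntel"
  screenWidth: screen.width,        // full desktop resolution
  screenHeight: screen.height,
  windowWidth: window.innerWidth,   // just the browser viewport
  windowHeight: window.innerHeight,
  plugins: Array.from(navigator.plugins).map(p => p.name),
};

console.log(visible);
```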
Plugins, on the other hand, have free rein. They're installed by the user and can access anything the user can access. That's why they're able to access your webcam, upload files, do virus scans, etc. They're also able to expose APIs to JavaScript code, which pokes a hole in the JavaScript sandbox and gives JavaScript some external access. That's how tools like PhoneGap give JavaScript code in web apps access to phone hardware (GPS, orientation, camera, etc.).

What pure server-side technologies allow HTML website capture on shared hosting?

As far as I know, PHP alone can't be used for this.
But since not many providers allow installation of Perl/Python/... scripts on shared hosting, I'm wondering whether there is a free solution for either:
a service that creates thumbnails or full-size captures on the fly / on demand and saves them to the server (since snapshot services only let you show thumbnails on hover),
or
a Flex/Flash solution to capture the website, with PHP to save it (or save it directly with Flex/Flash) - code to run on the server.
Is it possible?
To capture how a website looks, you first need something to render it.
Because you are usually optimizing a web site to run on the major browsers, you will want one of them to handle the rendering.
This (opening a browser instance, opening a certain web page, rendering it and dumping a screen shot of the results) is possible - it's how services like browsershots.org work.
It's just not trivial to set up, and requires total freedom in setting up the server (i.e. administrator privileges to install programs, set rights, etc.). It definitely is not possible to do with pure PHP, Perl, Python, or any other scripting language, on a restricted shared hosting environment.
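To illustrate what that freedom buys you, here is a minimal sketch using Puppeteer (a headless-Chromium library for Node), written in TypeScript; it is one example of the approach, and it will not run on a restricted shared host.

```typescript
import puppeteer from 'puppeteer';

async function captureSite(url: string, outputPath: string): Promise<void> {
  const browser = await puppeteer.launch();   // starts headless Chromium
  try {
    const page = await browser.newPage();
    await page.setViewport({ width: 1280, height: 800 });
    await page.goto(url, { waitUntil: 'networkidle2' }); // let the page render
    await page.screenshot({ path: outputPath, fullPage: true });
  } finally {
    await browser.close();
  }
}

captureSite('https://example.com', 'example.png').catch(console.error);
```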
If you're on Windows, the answer to this question may be of help.
For a list of snap shot services, see this question.

Resources