How long until you adopt a new specification (like HTML 5, for example)?

When a new specification comes out (like HTML 5) it can be tempting to begin using its enhancements; however, how do you deal with the fact that not all browsers will be up to snuff with the latest and greatest specs? Surely, it's no fun having to code the same thing twice. While we can take advantage of things that degrade gracefully, isn't it just easier to use what's available to all of today's common browsers? What's your practice (or waiting period) for adopting new specs?

In the case of HTML5, I will probably not adopt it for any "core" functionality of a website before the required functionality is supported by the web browsers used by something like 90% of my users -- which means, unfortunately, not that soon for any "general" site :-(
Maybe when something like 80% of my users support the most interesting parts, I'll start using those, and degrade gracefully for the others, though...
But you're not always the one deciding: your clients are often the ones who choose... And if they're stuck with IE6 because of company policy and the like... And yes, there are too many users stuck on IE6, without the ability to upgrade or use anything else.
For instance, take the new <video> tag: how will you convince your clients that you should use it on their website, when they already have some embedded Flash stuff that works just fine for more users than the ones who would be able to use the <video> tag?
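One middle path with <video>, for what it's worth, is to feature-detect it and keep the Flash embed as the fallback, so newer browsers get the tag and everyone else gets what they already have. A minimal sketch in JavaScript; the file names and the "player" container element are placeholders, not anything from a real project:

    // Detect native <video> support before committing to the tag.
    var probe = document.createElement("video");
    var hasVideo = !!(probe.canPlayType &&
        probe.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"') !== "");

    var player = document.getElementById("player"); // placeholder container
    if (hasVideo) {
      player.innerHTML = '<video src="movie.mp4" controls></video>';
    } else {
      // Older browsers keep the existing Flash embed.
      player.innerHTML =
        '<object type="application/x-shockwave-flash" data="player.swf" ' +
        'width="640" height="360"></object>';
    }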

HTML5 is a special beast. A lot of the time it simply specifies the common behaviour as implemented by the browsers, which means there are parts you are free to use right now. If, for example, you use the simple doctype or encoding declaration, you should be fairly safe as far as browsers go. Some other parts add behaviour that does not really need much support from the browser, for example the custom data attributes. Still other parts of the specification can easily be implemented in JavaScript if the browser does not support them. In this sense you can adopt the advanced form handling now, dropping the JavaScript solution once all the supported browsers implement it natively. So there's definitely not a single answer that would help you, even more so in the case of HTML5.
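As a concrete illustration of that last point, here is a sketch of the usual pattern for the new form controls: feature-detect, and only wire up a script-based fallback where the browser reverts the input to plain text. attachDatePicker() is a hypothetical stand-in for whatever widget your own code or library provides:

    // Unsupported input types silently fall back to type="text",
    // which makes detection trivial.
    var probe = document.createElement("input");
    probe.setAttribute("type", "date");
    if (probe.type !== "date") {
      // Hypothetical helper: swap in a script-driven date picker.
      attachDatePicker(document.getElementById("birthday"));
    }
    // Once all the supported browsers implement type="date" natively,
    // this branch can simply be deleted.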
Also see many of the questions under the html5 tag here on SO.

I won't use HTML5 (much as I really want to) until Firefox and IE support it. Since most of my development is corporate-internal, all that matters to me is IE. But externally, Chrome (which is furthest along here) has the least market share. If both Firefox and IE support it, I'm good.

Related

Securely running user's code

I am looking to create an AI environment where users can submit their own code for an AI and let the AIs compete against each other. The language could be anything, but something easy to learn like JavaScript or Python is preferred.
Basically I see three options with a couple of variants:
1. Make my own language, e.g. a JavaScript clone with only very basic features like variables, loops, conditionals, arrays, etc. This is a lot of work if I want to properly implement common language features.
1.1. Take an existing language and strip it to its core. Just remove lots of features from, say, Python until there is nothing left but the above (variables, conditionals, etc.). Still a lot of work, especially if I want to keep up to date with upstream (though I could also just ignore upstream).
2. Use a language's built-in features to lock it down. I know from PHP that you can disable functions, and from searching around, similar solutions seem to exist for Python (with lots and lots of caveats). For this I'd need a good understanding of all the language's features, and I couldn't afford to miss anything.
2.1. Make a preprocessor that rejects code containing dangerous constructs (preferably whitelist-based). Similar to option 1, except that I only have to implement the parser rather than all the language features: the preprocessor has to understand the language well enough that you can have a variable named "eval" but not call the function named "eval". Still a lot of work, but more manageable than option 1 (see the sketch after this list).
2.2. Run the code in a very locked-down environment: chroot, no unnecessary permissions... perhaps in a virtual machine or container, something along those lines. I'd have to research how to achieve this and how to get the results back in a secure way, but it seems doable.
3. Manually read through all code. Doable on a small scale or with moderators, though still tedious and error-prone (I might miss stuff like if (user.id = 0)).
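For what option 2.1 could look like in practice: reusing an existing parser avoids writing your own. Below is a minimal, illustrative sketch using the acorn JavaScript parser (the acorn and acorn-walk npm packages); the whitelist and the banned-name set are nowhere near complete, just enough to show the shape of the idea:

    var acorn = require("acorn");
    var walk = require("acorn-walk");

    // Whitelist of syntax: roughly "variables, loops, conditionals, arrays".
    // Note that MemberExpression is absent, so window["eval"] and friends
    // are rejected along with any other property access.
    var ALLOWED = new Set([
      "Program", "VariableDeclaration", "VariableDeclarator", "Identifier",
      "Literal", "ExpressionStatement", "AssignmentExpression",
      "BinaryExpression", "LogicalExpression", "UnaryExpression",
      "UpdateExpression", "IfStatement", "ForStatement", "WhileStatement",
      "BlockStatement", "ArrayExpression", "CallExpression",
      "FunctionDeclaration", "ReturnStatement"
    ]);
    var BANNED_CALLS = new Set(["eval", "Function", "require"]);

    function vet(source) {
      var problems = [];
      var ast = acorn.parse(source, { ecmaVersion: 2020 });
      walk.full(ast, function (node) {
        if (!ALLOWED.has(node.type)) {
          problems.push("disallowed syntax: " + node.type);
        }
        // A variable named "eval" is fine; calling eval is not.
        if (node.type === "CallExpression" &&
            (node.callee.type !== "Identifier" ||
             BANNED_CALLS.has(node.callee.name))) {
          problems.push("disallowed call target");
        }
      });
      return problems; // empty means the submission passed
    }

With this shape, vet("var eval = 1; eval += 2;") comes back clean while vet("eval('...')") does not, which is exactly the variable-versus-function distinction described above.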
The way I imagine 2.2 working is this: run both AIs in a virtual machine (or something similar) and constrain it to communicate with the host machine only (no other Internet or LAN access). Each AI runs in a separate machine, and they communicate with each other (well, with the playing field, through which they see each other's positions) via an API running on the host.
Option 2.2 seems the most doable, but also relatively hacky: I let someone's code loose in a virtualized or locked-down environment, hoping it will keep them contained while giving them free rein to try to DoS the system or break out of the environment. Then again, most of the other options are not much better.
TL;DR: in essence my question is: how do I let people give me 'logic' for an AI (which I think is most easily done using code) and then run that without compromising the functionality of the system? There must be at least 2 AIs working on the same playing field.
This is really just a plugin system, so researching how others implement plugins is a good starting point. In particular, I'd look at web browsers like Chrome and Safari and their plugin systems.
A common theme in modern plugin systems is process isolation. Ideally you should run the plugin in its own process space, in a sandbox. On OS X, look at XPC, which is designed explicitly for this problem. On Linux (or more portably), I would probably look at NaCl (Native Client). The JVM is also designed to provide sandboxing, and offers a rich selection of languages. (That said, I don't personally consider the JVM a very strong sandbox; it has had a history of security problems.)
In general, my preference on these kinds of projects is a language-agnostic API. I most often use REST APIs (or "REST-like" ones). This allows the plugin to be highly restricted without restricting the language choice: plain HTTP has rich support in numerous languages, so it puts very little burden on the plugin author. In fact, given your description, you wouldn't even have to run the plugin on your hardware (and certainly not on the main server). Making the plugins remote clients removes many potential concerns.
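To make the REST-ish playing-field idea concrete, here is a small sketch of the host side in Node with Express; the routes and the shape of the game state are invented for illustration, not a prescription:

    var express = require("express");
    var app = express();
    app.use(express.json());

    // Hypothetical shared playing field, kept on the trusted host.
    var state = { positions: {} };

    // Each sandboxed AI polls the field...
    app.get("/field", function (req, res) {
      res.json(state);
    });

    // ...and posts its move. Validate everything server-side; never
    // trust the sandboxed client.
    app.post("/move/:player", function (req, res) {
      var move = req.body;
      if (!move || typeof move.x !== "number" || typeof move.y !== "number") {
        return res.status(400).json({ error: "malformed move" });
      }
      state.positions[req.params.player] = { x: move.x, y: move.y };
      res.json({ ok: true });
    });

    app.listen(3000);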
But ultimately, I think something like your "2.2" is the right direction.

Best practice for writing an internet site

I am writing an internet site.
I want my site to run in any browser (Chrome, IE, Safari).
For the client side, what is the best practice for writing a site that runs in any browser (or at least most of them)?
What I generally do:
Use http://validator.w3.org/ to validate the HTML and CSS.
Use http://www.jslint.com/ to validate the JavaScript.
Questions:
There are some headers in CSS related to browser support - what should I put in my CSS?
Should I avoid using px (e.g. 1px) and instead use em (or an equivalent ratio) in my CSS file?
How far back should I go with browser support? (e.g. is IE5 old enough that it isn't worth the time I'd spend supporting it? I'd need to know how much it is used around the world.)
I need some best-practice guidelines so that my site runs in most browsers.
Any suggestion will be appreciated.
Thanks :)
Let me cover your questions one by one:
There are some headers in CSS related to browser support - what should I put in my CSS?
You should definitely be using a CSS "reset". There are a number of these available, but the basic idea is to get rid of the differences between the default behaviours of various browsers. See this question for more on which one to use.
Should I avoid using px (e.g. 1px) and instead use em (or an equivalent ratio) in my CSS file?
px and em both have their place. They're both good to use. em is better in some cases for making your site scalable, but there are plenty of good reasons to use px, and it's also perfectly valid to use them both in the same site. My advice: use whatever works best for you in any given situation.
How far back should I go with browser support? (e.g. is IE5 old enough that it isn't worth the time I'd spend supporting it? I'd need to know how much it is used around the world.)
You're right, IE5 is long forgotten. Most people I know have also now dropped support for IE6 and IE7. Both of them are down to virtually zero usage in most countries. If you have a specific need to support them, you will know about it already; if not, drop them. They are both missing some important features, and dropping them will make your life a lot easier as a developer.
For Safari, you'll need to support a few versions, as people with Macs often don't upgrade their OS and may be stuck on a lower Safari version.
For Chrome and Opera, you only need to support the current version.
For Firefox, you need to support back to FF17 (the current extended support version).
You should also consider mobiles. You need to make your own decisions about this; there's a lot of mobile devices out there, and a wide range of browsers and versions.
I need some best-practice guidelines so that my site runs in most browsers.
Use sites like http://CanIUse.com to check browser compatibility for any specific features you want to use. If you need to use a feature that isn't supported in some browsers, you may still be able to use it by making use of a polyfill script. But don't use too many.
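The usual pattern, sketched below: detect the feature itself, and only then pull in the polyfill, so capable browsers pay nothing. The script path is a placeholder for wherever you host the polyfill:

    // Only browsers that lack the feature pay for the polyfill.
    if (!("querySelector" in document)) {
      var s = document.createElement("script");
      s.src = "/js/selector-polyfill.js"; // placeholder path
      document.getElementsByTagName("head")[0].appendChild(s);
    }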

Is it or should it be possible to modify the GUI of an application after it's compiled?

I'm a Linux user, and I have been very hesitant to use Glade to design GUIs, since the XML files it produces can easily be modified. I know it doesn't sound like a major issue, but what if it's a commercial app that you just don't want people changing?
I use Mac OS X every once in a while, and I figured out that it uses ".nib" files for GUIs. I think they're essentially the same type used in NeXTSTEP and OpenStep (there's even a Linux app which lets you edit these files). Anyway, these files are included in the application bundle, and according to some people, are completely editable. One person claims he even successfully edited Keynote's interface.
Now, why would that be possible? Is it completely okay for the end user to change the interface? Or is it better to have the GUI directly in the compiled application code, like traditional GTK apps?
OS X nib files are one option; the other is to build the UI programmatically. On Android, either XML files or program code can define the GUI. In Windows WPF, the UI is defined in XAML, an XML-based format. Firefox/Mozilla? XUL, another XML-based UI language.
Most modern GUI toolkits offer both of these options, and some support only defining UIs in files.
But even binaries are modifiable. With a good binary reverse engineering tool, it's wide open. The only way to be really certain is to do what Apple did with iOS, and run signed code; the entire bundle is signed by a key and can't be run if modified.
This isn't a problem for almost anyone. Why do you care if the UI is modified? The underlying code isn't, so functionality can't be added or changed.
As a corollary (and a little off-topic), something that you might have a valid concern about is stuff a little more like this.
I don't really see a problem with it. If a user messes up his UI, then it's his problem. Think of it like moddable games. Users always loved them, and in the end, most games benefit from it. There is usually nothing secret about an application's user interface. If there is, you could always do some sort of encryption.
As others have said, you can also add checksums if you just want to disallow editing.
The XML specifies little more than what the interface looks like. Without the compiled-in event handling code, it's pretty much useless. My opinion is that customers change it at their own risk, and you might actually get some free, useful improvements out of their hacks.
If you're really paranoid about people changing it, you could always add an MD5 digest verification step when you load the XML, or compile the XML string into a header file, but that defeats many of the benefits.
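The digest check itself is only a few lines in any language. A sketch in Node-style JavaScript, just to show the shape; EXPECTED would be computed at build time and baked into the program, and SHA-256 is used here rather than MD5 only because MD5 collisions are now cheap to produce:

    var crypto = require("crypto");
    var fs = require("fs");

    var EXPECTED = "9f2c..."; // placeholder: digest baked in at build time

    function loadUiDefinition(path) {
      var xml = fs.readFileSync(path);
      var actual = crypto.createHash("sha256").update(xml).digest("hex");
      if (actual !== EXPECTED) {
        throw new Error("UI definition was modified; refusing to load");
      }
      return xml.toString("utf8");
    }

Of course, as noted above, anyone able to edit the XML can usually also patch this check out of the binary, so it only deters casual editing.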
The theming engine can make substantial-looking changes to your GUI, as can tools like Parasite. Updating the Glade layout (at their own risk) is much safer than either of those.
What's wrong with users customizing the UI anyway?

Can I (relatively easily) test ZK interfaces in Watir?

How easily will Watir interact with a ZK interface? If "not at all" do you have any recommendations for automated testing of the web interface for me?
Edit: Another way to put this would be: can I test a Spring/ZK generated page (Ajax/JScript)? I also found another constraint: I need to avoid using a proxy for testing (as Sahi does), if at all possible.
Edit: I have been testing ZK interfaces for quite some time now. With better knowledge of Watir (and now WebDriver) I can say it's definitely possible. Timing isn't usually an issue, but finding the elements certainly can be, as the ids are dynamically generated. I recommend a strong, maintainable, object-oriented approach with a powerful and dynamic DSL, or you'll end up listing every element on the page in a custom-built object library of some sort. So... it works, but it needs extra effort.
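To illustrate the id problem: a generated id like z_k7_3 (invented here) changes between sessions, so lean on whatever stays stable (visible text, structure, or attributes you control). A sketch using the Node bindings of Selenium WebDriver, with a placeholder URL and selector:

    var webdriver = require("selenium-webdriver");
    var By = webdriver.By;
    var until = webdriver.until;

    async function clickSave() {
      var driver = await new webdriver.Builder().forBrowser("firefox").build();
      try {
        await driver.get("http://example.com/zk-app"); // placeholder URL
        // Bad: By.id("z_k7_3") -- the generated id changes every session.
        // Better: anchor on the stable, visible label instead.
        var save = await driver.wait(
          until.elementLocated(By.xpath("//button[normalize-space()='Save']")),
          10000);
        await save.click();
      } finally {
        await driver.quit();
      }
    }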
If you're talking about this: http://zssdemo.zkoss.org/ , take a look at the DOM output; it's atrocious, but it's possible to test it with Watir. I've dealt with some apps that generate awful output like that. It makes for a challenge. :) Search the Watir Google group for testing Ajax; plenty of people do it.
HTH,
Charley

How to organize libraries and links of programming information

I have an email account whose sole purpose is to store interesting and useful links to programming articles, code, and blog posts. It has become a little knowledgebase of sorts. I can even search it, which is pretty cool.
However, after using this account for a couple of years, I now have 775 links, and it has become this big blob of amorphous information, most of which I have never looked at again. I take comfort in the fact that, if I really needed to, I could find something in there again, if I even remember putting it in there in the first place. But it has developed a "smell," if you will.
How are you organizing your programming library of cool stuff? Do you have a system or tool, and is it better than the way I'm doing it?
I would use something that is made for storing bookmarks. I use delicious.com for all of my bookmarks. The tagging system works perfectly for technology sites because you can tag each page with a specific language or tech abbreviation. This, coupled with the Delicious Bookmarks plugin, makes it very easy to tag sites and get back to them.
Use one word or abbreviations for languages: java c# vb.net python
Use acronyms for technologies: wpf wcf
I used to use the standard bookmark system in the browser, but since I bounce around between various machines and browsers throughout the day, I started to use bookmark synchronizers: both Foxmarks and the one that Google came out with. I wasn't completely satisfied with either. Plus, Delicious has a great web interface as well as a decent API to extend for your own purposes.
IMHO, using Evernote to store this information is great.
1) you can go back and search through it easily
2) organize by tags and notebook collections
3) available on multiple platforms (even mobile)
4) available as browser plugins (for direct archiving in-browser)
The only drawback is that its copy-paste functionality is a little lacking (it sometimes doesn't import/display the CSS styles correctly).
Otherwise, it's a great alternative to store web "bookmarks" (and also archive the content at the same time).
