What's the most secure way to send data from A to B? [closed]

If I had let's say a sensitive report in PDF format and wanted to send it to someone, what is the most secure way?
Does a desktop application make it more secure, since we are basically doing client-to-server communication over a private IP address? Then add some standard encryption algorithm to the data as you send it over the wire?
What about a web-based solution? On the web, you have a third party in the loop. Sure, it would do the same kind of encryption that I would have on a desktop, but now instead of client->server directly you have client->server and then server->client, and you are exposed to the broader internet, making yourself more open to man-in-the-middle attacks. One thing the web has going for it is digital certificates, but I think that is more authentication than authorization, which the desktop approach doesn't need?
Obviously, from a usability point of view, a person wants to just go to a web page and download the report he's expecting. But most secure? Is desktop the answer? Or is it just too hard to do from a usability perspective?
OK, there seems to be some confusion. I am a software engineer facing a problem where business users have some secure documents that they need to distribute. I am just wondering whether using the web and SSL/CA is the standard solution to this, or whether a desktop application could be the answer.

The method that comes to mind as being very easy (as in it has been done a lot and is proven) is just distributing via a web site that is secured with SSL. It's trivial to set up (doesn't matter if you're running Windows, *nix, etc) and is a familiar pattern to the user.
Setting up a thick client is likely more work because you have to do the encryption yourself (not difficult these days, but there is more to know in terms of following best practices). I don't think that you'll gain much (any?) security from having to maintain a significantly larger set of code.
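For what it's worth, the client side of the "SSL-secured web download" pattern is tiny. A minimal sketch in Java 11+ (the URL and file name here are made up for illustration):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;

    public class ReportDownload {
        public static void main(String[] args) throws Exception {
            // Hypothetical report URL; the server's certificate is validated
            // automatically against the JVM's default trust store.
            URI reportUri = URI.create("https://reports.example.com/monthly.pdf");

            HttpRequest request = HttpRequest.newBuilder(reportUri).GET().build();

            // Stream the response body straight to disk over the TLS session.
            HttpResponse<Path> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofFile(Path.of("monthly.pdf")));

            System.out.println("HTTP status: " + response.statusCode());
        }
    }

All of the transport encryption comes from HTTPS itself; the thick-client alternative would have to reproduce that by hand.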

Most secure would be print it, give it to a courier in a locked briefcase, and have the courier hand deliver it. I think that'd be going overboard, though :)
In real world terms, unless you're talking national security (in which case, see courier option above), or Trade Secrets Which Could Doom Your Company (again, see courier option above), having a well encrypted file downloaded from the web is secure enough. Use PGP encryption (or similar), and I recommend the Encrypt and Sign option, make the original website a secure one as well, and you're probably fine.
The other thing about a desktop application is: how is it getting the report? If it's not generating the report locally, it's really doing just as many steps as a web page: app requests report, report generated, server notifies client, client downloads.
A third option, though, is to use something other than the website to download the reports. For instance, you could allow the user to request the report through the web, but provide a secure FTP (SFTP or FTPS) site or AS2 (or AS3) connection for the actual download.
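If you go the SFTP route, the programmatic download is also straightforward. A rough sketch using the third-party JSch library (host, account, and paths are hypothetical):

    import com.jcraft.jsch.ChannelSftp;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;

    public class SftpFetch {
        public static void main(String[] args) throws Exception {
            JSch jsch = new JSch();
            // Check the server against a known_hosts file rather than disabling host key checks.
            jsch.setKnownHosts("known_hosts");

            Session session = jsch.getSession("reportuser", "sftp.example.com", 22);
            session.setPassword(System.getenv("SFTP_PASSWORD")); // keep secrets out of source code
            session.connect();

            ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
            sftp.connect();
            sftp.get("/outgoing/report.pdf", "report.pdf"); // transferred over the encrypted SSH channel

            sftp.disconnect();
            session.disconnect();
        }
    }

Public key authentication (jsch.addIdentity with a private key file) would be preferable to a password for an unattended job.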

Using a secure file transfer (or managed file transfer) solution is definitely the best option for securely transferring electronic data. There are smaller, more personal-use solutions out there like Dropbox, or enterprise solutions like BiscomDeliveryServer.com.

Print it off, seal it in an envelope, hire some armed guards for protection and hand deliver it to them.
You may think it's a silly answer, but unless you can identify what your threat vectors are, any answer is pretty meaningless, since there is no guarantee it will address those threats.

Any system is only as secure as its weakest link. If you sent the document securely and the user downloaded / saved it to their desktop, then you'd be no better off than with an insecure system. Even worse, they could get the document and then send it on to loads of people that shouldn't see it, etc. That leads on to the question of whether you have an actual requirement that they can only view and not download the document. If not, why go to all this effort?
But if they are able to download it, then the most secure method may be to send them an email telling them that the document is available. They then connect to a system (web / FTP?) using credentials sent separately to authenticate their access.

I'm surprised no one has mentioned a PK-encryption over email solution. Everyone in the "enterprise" gets a copy of everyone else's public key and their own private key. Lots of tools exist to do the heavy-lifting. Start with PGP and work from there.

How To Improve Security For Simple File Download From A Web Server?

Dear StackOverflow community,
======================================
TL;DR VERSION:
Before we proceed further in our relationship with a cloud web portal provider, I'd like to insist that they provide us a secure way to obtain a copy of our data from their web server.
Secure for authenticating ourselves without leaving ourselves vulnerable to having our credentials stolen or spoofed and
Secure for the file in transit on its way back to us.
I suspect I might have to point them in the right direction myself despite my own inexperience in the field. What kinds of simple-yet-secure approaches to authenticating us could I ask them to look into?
======================================
FULL POST
BACKGROUND:
At work, we are evaluating a cloud-based portal through which our current and former customers will be able to network with each other (we have customers who interact with us in cohorts).
The user interface of the portal is well-designed, which is why we're thinking about buying it, but the company providing it is young. So, for example, their idea of "helping us integrate our portal data with SalesForce" was to have a link within the administrative control panel to a page that returns a CSV file containing the entire contents of our database.
"Fetch a CSV" actually is fine, because we already do it with other CSV files from our ERP (pushing to SalesForce with a data loader and scheduled Windows batch scripting on an always-on PC).
I said we could work with it as long as they provided us a way to fetch the CSV file programmatically, without human intervention, at 5AM. They did so, but the solution seems vulnerable to exploitation and I'd like guidance redirecting their efforts.
A DIVERSION ABOUT THE HUMAN UI:
The link one sees as a human using the web interface to the portal under consideration is http://www.OurBrandedDomain.com/admin/downloaddatabase
If you aren't already logged in, you will be redirected to http://www.OurBrandedDomain.com/Admin/login?returnUrl=admin/downloaddatabase, and as soon as you log in, the CSV file will be offered to you.
(Yes, I know, it's HTTP and it's customer data ... I'm planning to talk to them about turning off HTTP access to the login/signup forms and to the internals of the site, too. Not the focus of my question, though.)
THEIR PROPOSAL:
So, as I said, I asked for something programmatically usable.
What they gave us was instructions to go to http://www.OurFlavorOfTheirSite.com/admin/fetchdatabase?email=AdminsEmail@Domain.com&password=AdminsPassword
Please correct me if I'm wrong, but this seems like a really insecure way to authenticate ourselves to the web server.
HOW I NEED HELP:
Before we proceed further in our relationship with this portal provider, I'd like to insist that they provide us a secure way to obtain a CSV copy of our data.
Secure for authenticating ourselves without leaving ourselves vulnerable to having our credentials stolen or spoofed and
Secure for the file in transit on its way back to us.
However, I don't get the sense that they've really thought about security much, and I suspect I might have to point them in the right direction myself despite my own inexperience in the field.
What kinds of simple-yet-secure approaches to authenticating us could I ask them to look into, knowing nothing more about the architecture of their servers than can be inferred from what I've just described here?
The solution doesn't have to involve us using a browser to interact with their server. Since we'll be downloading the file in a Windows scripting environment without human intervention, it's fine to suggest solutions that we can only test programmatically (even though that will make my learning curve a bit steeper).
(I suppose the solution could even get away from the server providing the data in the form of a CSV file, though then we'd probably just end up rebuilding a CSV file locally because we have infrastructure in place for CSV->SalesForce.)
Thanks in advance.
Yes, that is insecure.
You should insist on using TLS. For this they need to install a certificate from a Certification Authority to verify that they own the domain OurFlavorOfTheirSite.com. This will enable the URL to use HTTPS which means communication is encrypted, and authenticated (i.e. another website cannot spoof OurFlavorOfTheirSite.com without a browser warning being displayed).
Although the email=AdminsEmail@Domain.com&password=AdminsPassword parameters will be encrypted, these should be submitted via POST rather than GET. The reason is that GET query string parameters are logged in browser history, logged in proxy and server logs by default, and can be transmitted in the Referer header when resources are included from other domains.
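To make the request concrete, here is a rough sketch of what the unattended 5 AM job could look like once the provider accepts POSTed credentials over HTTPS (Java 11+; the URL, file names, and environment variable are made up):

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Path;

    public class CsvFetch {
        public static void main(String[] args) throws Exception {
            String email = "admin@example.com";
            String password = System.getenv("PORTAL_PASSWORD");

            // Credentials travel in the request body, not the query string, so they
            // are protected by TLS and never show up in URLs, server logs, or Referer headers.
            String form = "email=" + URLEncoder.encode(email, StandardCharsets.UTF_8)
                    + "&password=" + URLEncoder.encode(password, StandardCharsets.UTF_8);

            HttpRequest request = HttpRequest.newBuilder(
                            URI.create("https://www.OurFlavorOfTheirSite.com/admin/fetchdatabase"))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(form))
                    .build();

            HttpResponse<Path> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofFile(Path.of("export.csv")));

            System.out.println("HTTP status: " + response.statusCode());
        }
    }

An API token with narrow permissions would be better still than the admin password, but that requires changes on the provider's side.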

making a website local

I'm going to build a website for file manipulation. The idea is that the user uploads his files to the website, clicks the "manipulate" button, and then gets the resulting file back. The user will also have to pay according to the number of files he wants to manipulate.
The code for the file manipulation is already written in JAVA.
The thing is, some of these files will probably be truly sensitive and private, so users will not be delighted to upload to my site over the internet.
I thought about making a local version of the website and letting the user download it (the local version) to his computer, so the only internet access would be for the payment step.
But there seem to be two problems:
When I decide to change anything on my website, it will not affect the local users.
The local site will be very easy to "crack" in order not to pay...
This is my first website, so do you have any suggestions for how to solve either of these two problems?
Thanks!
Concerning question (1): you would have to implement some update mechanism. For example, your "local web site" (which might be a .jar file containing a web server) could check over the internet whether a new version is available and then download and install it (however, you should generally ask for the user's permission to do so, as many users are not delighted with silently auto-updating software).
Concerning question (2): you might use a code obfuscator to make your compiled Java classes more difficult to decompile, and use an encrypted SSL connection for the payment-related transactions (while checking the server certificate to avoid man-in-the-middle attacks by the end user); however, any software that a user has on their own computer will eventually be cracked by somebody. Therefore, the best solution is probably to keep everything on your server while securing the whole setup as much as possible: use encrypted SSL connections for everything, or, if the files are highly sensitive, provide a public key so users can encrypt their files with GPG (or similar software) before sending them to the site, and encrypt the files you send back with the user's public key (which he/she has to provide you, and which is not at all critical to transfer over the internet). Also carefully check the security of your web server and all the software running on it, to avoid bugs that might allow somebody to hack into it. Using GPG/public-key encryption and only storing encrypted data on your server may already be good protection (but you have to make sure that it is impossible to obtain your private key in any way!).
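To illustrate the public-key idea without tying it to GPG specifically, here is a rough Java (JCA) sketch of the usual hybrid pattern: a fresh AES key encrypts the file, and the site's RSA public key wraps that AES key, so only the server can decrypt. The file names and key format are assumptions for the example:

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.KeyFactory;
    import java.security.PublicKey;
    import java.security.SecureRandom;
    import java.security.spec.X509EncodedKeySpec;

    public class ClientSideEncrypt {
        public static void main(String[] args) throws Exception {
            // Hypothetical inputs: the user's file and the site's RSA public key (X.509/DER encoded).
            byte[] plaintext = Files.readAllBytes(Path.of("sensitive.dat"));
            PublicKey sitePublicKey = KeyFactory.getInstance("RSA").generatePublic(
                    new X509EncodedKeySpec(Files.readAllBytes(Path.of("site_public_key.der"))));

            // 1. A random AES-256 session key encrypts the file (fast, works for any size).
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(256);
            SecretKey sessionKey = keyGen.generateKey();

            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);
            Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
            aes.init(Cipher.ENCRYPT_MODE, sessionKey, new GCMParameterSpec(128, iv));
            byte[] ciphertext = aes.doFinal(plaintext);

            // 2. The session key is wrapped with the site's public key, so only the
            //    server (which holds the matching private key) can recover it.
            Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
            rsa.init(Cipher.WRAP_MODE, sitePublicKey);
            byte[] wrappedKey = rsa.wrap(sessionKey);

            Files.write(Path.of("sensitive.dat.enc"), ciphertext);
            Files.write(Path.of("sensitive.dat.key"), wrappedKey);
            Files.write(Path.of("sensitive.dat.iv"), iv);
        }
    }

GPG gives you the same structure plus a standard container format and signatures, which is why it is usually the better choice in practice.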

How would you attack a domain to look for "unknown" resources? [closed]

Given a domain, is it possible for an attacker to discover one or many of the pages/resources that exist under that domain? And what could an attacker do/use to discover resources in a domain?
I have never seen the issue addressed in any security material (because it's a solved problem?), so I'm interested in ideas, theories, and best guesses, in addition to practices; anything an attacker could use in a "black box" manner to discover resources.
Some of the things that I've come up with are:
Google -- if google can find it, an attacker can.
A brute force dictionary attack -- iterate common words and word combinations (Login, Error, Index, Default, etc.). The dictionary could also be narrowed if the resource extension were known (xml, asp, html, php), which is fairly discoverable.
Monitor traffic via a Sniffer -- Watch for a listing of pages that users go to. This assumes some type of network access, in which case URL discovery is likely small peanuts given the fact the attacker has network access.
Edit: Obviously, directory listing is turned off.
The list on this is pretty long; there are a lot of techniques that can be used to do this; note that some of these are highly illegal:
See what Google, archive.org, and other web crawlers have indexed for the site.
Crawl through public documents on the site (including PDF, JavaScript, and Word documents) looking for private links.
Scan the site from different IP addresses to see if any location-based filtering is being done.
Compromise a computer on the site owner's network and scan from there.
Exploit a vulnerability in the site's web server software and look at the data directly.
Go dumpster diving for auth credentials and log into the website using a password on a post-it (this happens way more often than you might think).
Look at common files (like robots.txt) to see if they 'protect' sensitive information.
Try common URLs (/secret, /corp, etc.) to see whether they return an access-denied response (401/403, or a redirect to a login page) rather than a 404 (page not found).
Get a low-level job at the company in question and attack from the inside; or, use that as an opportunity to steal credentials from legitimate users via keyboard sniffers, etc.
Steal a salesperson's or executive's laptop -- many don't use filesystem encryption.
Set up a coffee/hot dog stand offering a free WiFi hotspot near the company, proxy the traffic, and use that to get credentials.
Look at the company's public wiki for passwords.
And so on... you're much better off attacking the human side of the security problem than trying to come in over the network, unless you find some obvious exploits right off the bat. Office workers are much less likely to report a vulnerability, and are often incredibly sloppy in their security habits -- passwords get put into wikis and written down on post-it notes stuck to the monitor, road warriors don't encrypt their laptop hard drives, and so on.
The most typical attack vector would be trying to find well-known applications, for example /webstats/ or /phpMyAdmin/, and looking for typical files that an inexperienced admin might have left in a production environment (e.g. phpinfo.php). Most dangerous of all: text editor backup files. Many text editors leave a copy of the original file with '~' appended or prepended, so imagine you have whatever.php~ or whatever.aspx~. As these are not executed, an attacker might get access to the source code.
Brute forcing (use something like OWASP DirBuster, which ships with a great dictionary; it also parses responses, so it can map the application quite quickly and find resources even in quite deeply structured apps)
Yahoo, Google and other search engines as you stated
Robots.txt
sitemap.xml (quite common nowadays, and got lots of stuff in it)
Web Stats applications (if any installed in the server and public accessible such as /webstats/ )
Brute forcing for files and directories is generally referred to as "forced browsing", which might help your Google searches.
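The core of forced browsing is trivial to sketch. A minimal Java 11+ illustration (the wordlist and target are made up; run this only against systems you are authorized to test):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    public class ForcedBrowsing {
        public static void main(String[] args) throws Exception {
            // Tiny stand-in wordlist; real tools like DirBuster ship with large dictionaries.
            List<String> wordlist = List.of("admin", "backup", "phpinfo.php", "webstats", "login");
            HttpClient client = HttpClient.newHttpClient();

            for (String path : wordlist) {
                HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://target.example.com/" + path)).GET().build();
                HttpResponse<Void> response =
                        client.send(request, HttpResponse.BodyHandlers.discarding());

                // Anything other than 404 (200, 401, 403, redirects) hints that the path exists.
                if (response.statusCode() != 404) {
                    System.out.println(response.statusCode() + "  /" + path);
                }
            }
        }
    }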
The path to resource files like CSS, JavaScript, images, video, audio, etc can also reveal directories if they are used in public pages. CSS and JavaScript could contain telling URLs in their code as well.
If you use a CMS, some CMS's put a meta tag into the head of each page that indicates the page was generated by the CMS. If your CMS is insecure, it could be an attack vector.
It is usually a good idea to set your defenses up in a way that assumes an attacker can list all the files served unless protected by HTTP AUTH (aspx auth isn't strong enough for this purpose).
EDIT: more generally, you are supposed to assume the attacker can identify all publicly accessible persistent resources. If the resource doesn't have an auth check, assume an attacker can read it.
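Pulling candidate paths out of the pages and assets a site already serves is equally simple. A rough sketch (the URL is hypothetical and the regex is deliberately crude):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LinkHarvest {
        public static void main(String[] args) throws Exception {
            // Fetch one public page and list every path it references.
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://target.example.com/index.html")).GET().build();
            String body = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString()).body();

            // Look for href/src attributes; JavaScript and CSS files can be scanned the same way.
            Pattern link = Pattern.compile("(?:href|src)=[\"']([^\"']+)[\"']");
            Matcher m = link.matcher(body);
            while (m.find()) {
                System.out.println(m.group(1));
            }
        }
    }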
The "robots.txt" file can give you (if it exists, of course) some information about what files\directories are there (Exmaple).
Can you get at the whole machine? Use common, well-known scanners and exploits.
Try social engineering. You'll be surprised how effective it is.
Brute-force sessions (JSESSIONID etc.), maybe with a fuzzer.
Try commonly used path signatures (/admin/, /adm/, ... on the domain).
Look for data inputs that are processed further, and test them for XSS / SQL injection / other vulnerabilities.
Exploit known-weak applications within the domain.
Use phishing tricks (XSS/XSRF/HTML meta refresh >> iframe) to forward the user to your fake page (while the domain name stays the same).
Black-box reverse engineering: what programming language is used? Are there bugs in that VM/interpreter version? Try service fingerprinting. How would you write a page like the one you want to attack? What security issues might the developer of the page have missed?
a) Try to think like a dumb developer ;)
b) Hope that the developer of the domain is dumb.
Are you talking about ethical hacking?
You can download the site with SurfOffline tools and get a pretty good idea of the folders, architecture, etc.
Best Regards!
When attaching a new box onto "teh interwebs", I always run (ze)nmap. (I know the site looks sinister - that's a sign of quality in this context I guess...)
It's pretty much push-button and gives you a detailed explanation of how vulnerable the target (read:"your server") is.
If you use mod_rewrite on your server, you could do something like this:
Any request that does not fit the expected patterns is redirected to a special page, where the IP (or whatever identifier you choose) is tracked. Once you have seen a certain number of "attacks" from it, you can ban that user/IP. The most efficient way would be to automatically add a corresponding rewrite condition to your mod_rewrite configuration.
A really good first step is to try a DNS zone transfer against their name servers. Many are misconfigured and will give you the complete list of hosts.
The fierce domain scanner does just that:
http://ha.ckers.org/fierce/
It also guesses common host names from a dictionary, as well as, upon finding a live host, checking numerically close IP addresses.
To protect a site against attacks, call upper management into a security meeting and tell them never to use their work password anywhere else. Most suits will carelessly use the same password everywhere: work, home, pr0n sites, gambling, public forums, Wikipedia. They are simply unaware that not every site can be trusted not to look at its users' passwords (especially the sites offering "free" stuff).

Best way to implement an SFTP server solution? [closed]

I'm currently setting up a commercial SFTP server, and I'm just looking for some of your opinions on the set-up I'm currently thinking of implementing, as well as a recommendation as to which commercial secure FTP server software would best suit. Bear in mind that the data I'm responsible for is highly sensitive, so any comments/feedback are much appreciated.
Here's the scenario:
1) Before file upload, files are compressed & encrypted using AES 256 with a salt.
2) Files uploaded from the clients' server over SFTP (port 22) to our SFTP server.
3) Files are then downloaded over HTTPS by our other client using one time password verification (strong 10 char alphanumeric password)
The specifics of the implementation I'm thinking of are:
For part (2) above, the connection is opened using host key matching, public key authentication and a user name/password combination. The firewall at both sides is restricted to only allow the static IP of the client server to connect.
For part (3), the other client is supplied with a user name/password on a per-user basis (for auditing) to log into their jailed account on the server. The encryption password for the file itself is supplied on a per-file basis, so I'm trying to apply two modes of encryption at all times here (except when the files are resting on the server).
Along with dedicated firewalls on both sides, Access control on the SFTP server will be configured to block IP addresses with a certain number of failed attempts over a short time, invalid passwords attempts will lock out users, password policies will be implemented etc.
I like to think that I've covered as much as possible but I'd love to hear what you guys think about this implementation?
For the commercial server side of things, I've narrowed it down to GlobalSCAPE SFTP with the SSH & HTTP module, or the JSCAPE Secure FTP server. I'll be assessing the suitability of each over the weekend, but if any of you have experience with either, I'd love to hear about it too.
Since the data is clearly both important and sensitive from your clients' perspectives, I'd suggest you consult a security professional. Home-grown solutions are typically a combination of over- and underkill, resulting in mechanisms that are both inefficient and insecure. Consider:
The files are pre-encrypted, so the only gain from SFTP/HTTPS is encryption of the session itself (e.g. login), but...
You're using PKI for upload and OTP for download, so there's no risk of exposing passwords, only user IDs -- is that significant to you?
How will you transmit the one-time passwords? Is the transmission secure?
Keep in mind that any lockout scheme should be temporary, otherwise a hacker can disable the entire system by locking each account.
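On the lockout point, the usual compromise is to make the block time-limited. A minimal in-memory sketch (the thresholds and class names are arbitrary; a real deployment would persist this and key it on the account as well as the address):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // After too many failures an address is blocked, but only for a fixed
    // cool-down period, so an attacker cannot lock accounts out permanently.
    public class LoginThrottle {
        private static final int MAX_FAILURES = 5;
        private static final Duration LOCKOUT = Duration.ofMinutes(15);

        private record State(int failures, Instant lastFailure) {}
        private final Map<String, State> failures = new ConcurrentHashMap<>();

        public boolean isBlocked(String address) {
            State s = failures.get(address);
            if (s == null || s.failures() < MAX_FAILURES) return false;
            if (Instant.now().isAfter(s.lastFailure().plus(LOCKOUT))) {
                failures.remove(address);   // cool-down elapsed, forget the record
                return false;
            }
            return true;
        }

        public void recordFailure(String address) {
            failures.merge(address, new State(1, Instant.now()),
                    (old, ignored) -> new State(old.failures() + 1, Instant.now()));
        }

        public void recordSuccess(String address) {
            failures.remove(address);
        }
    }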
Questions to ask yourself:
What am I protecting?
From whom am I protecting it?
What are the attack vectors?
What are the likelihoods and risks of a breach?
Once you've answered those questions, you'll have a better idea of the implementation.
In general:
Your choice of AES256 + salt is very reasonable.
Multi-factor authentication is probably better than multiple iterations of encryption. It's often thought of as "something you have, plus something you know," such as a certificate and a password, requiring both for access.
As far as available utilities, many off-the-shelf packages are both secure and easy to use. Look into OpenSSH, OpenVPN, and vsftpd for starters.
Good luck - please let us know what method you choose!
So what's wrong with OpenSSH that comes with Linux and the BSDs?
Before file upload, files are compressed & encrypted using AES 256 with a salt.
This part rings some alarm bells... have you written some code to do this encryption/compression? How are you doing the key management? You also say your key is password-derived, so your use of AES-256 and a salt is giving you a false sense of security: your real key space is much smaller. The use of the term 'salt' is also inappropriate here, which suggests further weaknesses.
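If a password-derived key is kept at all, the standard pattern is to stretch the password with a proper KDF and use an authenticated cipher mode. A rough Java (JCA) sketch, with the iteration count and container handling as assumptions:

    import javax.crypto.Cipher;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.GCMParameterSpec;
    import javax.crypto.spec.PBEKeySpec;
    import javax.crypto.spec.SecretKeySpec;
    import java.security.SecureRandom;

    public class PasswordEncryption {
        public static byte[] encrypt(char[] password, byte[] plaintext) throws Exception {
            SecureRandom random = new SecureRandom();
            byte[] salt = new byte[16];   // random per file, stored alongside the ciphertext
            byte[] iv = new byte[12];
            random.nextBytes(salt);
            random.nextBytes(iv);

            // Stretch the password into a 256-bit key; the iteration count is what slows
            // down offline guessing, the salt only prevents precomputed-table attacks.
            SecretKeyFactory kdf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            byte[] keyBytes = kdf.generateSecret(
                    new PBEKeySpec(password, salt, 200_000, 256)).getEncoded();
            SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");

            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal(plaintext);

            // A real container format would prepend the salt and IV to the output;
            // omitted here to keep the sketch short.
            return ciphertext;
        }
    }

Even then, the effective strength is bounded by the password itself, which is the point made above; public-key encryption of the file (PGP/GPG) avoids that bound.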
You would be better off to use a well proven implementation (e.g. something like PGP or GPG).
Also, if you use PGP style public key encryption for the file itself (and decent key management), the security of your SFTP server will matter a lot less. Your files could be encrypted at rest.
The argument for the security of the rest of the system is very convoluted (lots of protocols, authentication schemes, and controls) - it would be a lot easier to secure the file robustly, then do best practices for the rest (which will matter a lot less and also be independent controls).

What would it take to make OpenID mainstream? [closed]

OpenID is a great idea in principle, but the UI and the explanation as to why it is good are currently not tailored for general use -- what do you think it would take to make OpenID work for the general public? Can this be solved with technology, or is the problem so intrinsically hard that we are stuck with difficult explanations/multi-step registration procedures, numerous accounts, or poor security?
It needs to be much simpler: involve less knowledge of the concepts, and require fewer steps - preferably zero. When the technology works with little or no assistance, it'll take off.
The mechanics of OpenID credentials, providers and suppliers shouldn't need to be exposed to the user. People talk about educating the masses of internet users, but that's never going to happen - the masses never stop being stupid. If you want to appeal to the masses, you need to bring the technology down to meet their level instead. When a Google-affiliated site picks up that you're logged into Google and silently uses that account, it works without you ever having to tell it who you are. The fact that OpenID is so clumsy in comparison is why the big providers like Google are still avoiding it, and why the general public won't adopt it.
I think the developers of OpenID messed up when they used a URL rather than an email address for the IDs. People know what email addresses are, they already have one that's associated with them (or can get one easily), and email providers like Google and Microsoft are happy to adopt a role as portals. In fact, an automatic translation from email address to URL is all it would take:
myname@example.com -> http://www.example.com/openid/myname
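The translation really is just string manipulation; a toy Java sketch of the mapping suggested above (illustrative only, nothing in the OpenID spec defines this):

    public class OpenIdMapping {
        // Maps an e-mail style identifier onto the URL scheme suggested above.
        static String toOpenIdUrl(String email) {
            int at = email.indexOf('@');
            String user = email.substring(0, at);
            String domain = email.substring(at + 1);
            return "http://www." + domain + "/openid/" + user;
        }

        public static void main(String[] args) {
            System.out.println(toOpenIdUrl("myname@example.com"));
            // -> http://www.example.com/openid/myname
        }
    }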
I think it'll take a huge buy-in from a site that millions of people use; for example, MySpace is soon supporting OpenID, so now the number of users that OpenID supports has just jumped by a huge amount. If more of the high activity sites on the net follow this lead, there you go!
ISPs should provide OpenIDs to all their customers that mimic their e-mail addresses. Perhaps OpenID needs to support automatic translation of foo@example.com into http://openid.example.com/foo so that ISPs can easily set this up on a separate server.
It will take all the popular sites supporting it and making it transparent to the user.
"You can make a useraccount here, or if you use MySpace, Google Mail, Hotmail, etc then you can sign in using OpenID."
Don't sell it as a new service, sell it as being able to sign in using a different ID from another site.
The issue, however, is that with everyone supporting it, each user will now have a MySpace ID, a Google ID, etc. If they sign onto stackoverflow with their MySpace ID and later with Google, they may be perplexed that stackoverflow doesn't recognize them.
I wonder if OpenID has a solution for linking OpenID accounts so they are one and the same; I doubt the technology allows for it, since they are essentially independent signing authorities. Google would have to share data with MySpace and vice versa to enable that...
I don't think it will become mainstream. I think Ted Dziuba gets it right when he says it solves a "problem" that most people don't consider to be worth solving.
http://teddziuba.com/2008/09/openid-is-why-i-hate-the-inter.html
It will have to get a hell of a lot simpler, with easier-to-remember IDs.
You mean it isn't already? ;)
Obviously a lot of currently-popular applications would need to offer it and make it obvious that it was a good alternative.
If Google and Facebook made it an obvious option, that would help.
Ultimately, user education will really be the thing that does it. I doubt most people would care though...dumb sheeple.
Many of the responses so far seem to boil down to two options:
user education, and
forcing adoption (lots of sites changing to openid from in-house auth.)
Is that all we can do? What about distributed tools to make it easy for casual users to do openid delegation? (Say, something integrated with OS X / Windows / Ubuntu) Are there technological barriers that make this infeasible?
If client-side (and vendor-issued) applications could let you manage your on-line security preference, then we'd possibly be able to combat some of the risks associated with giving random sites your passwords -- since the "login area" would be some local program sitting in your systray, or what not. Of course, the integration of web apps with the desktop (such as that provided by Chrome) may make such a distinction impossible in practice, so it may be a moot point.
In any case, it seems like there should be something we could do now to make openid more palatable to the general public, and speed adoption in addition to making the system more user friendly.
As someone who primarily programs web apps in Java, I can't/won't use OpenID because the library support isn't there. JOID and openid4java are the only two that I know of. JOID is apparently not actively maintained, not including really important patches that have been on the mailing list for months; and openid4java requires >40 megabytes of external dependencies, including some that need to go into the endorsed classpath, which is, as one user commented, ridiculous:
Comment by witichis, Apr 28, 2008
46MB download for a simple redirect and de/encryp - are you f****n' drunk?
In my opinion, OpenID is not bad. It consolidates login credentials, and it does solve a real problem, even if it may not be the optimal solution. The only two problems I can see are that you must trust the identity provider not to allow someone else to claim to be you, and that relying parties (web sites you log in to) can collude to link your identity on multiple sites together.
I think we need to see OpenID offered as a login method on more consumer-oriented websites. There are a lot of big consumer sites that can be used as OpenID providers, but the only place I recall seeing OpenID available as a login before Stackoverflow is to comment on Blogger. Being a provider is great and all, but it's pretty much invisible to consumers. Seeing an actual place to use OpenID, on the other hand, will probably garner somewhat more interest.
It would certainly help if more OpenID consumers were also OpenID providers. As a developer, I'm comfortable going through a few contortions to figure out that I can create a new ID on openid.org, but the more mainstream consumer could easily be put off by the process.
The fact that big sites will accept OpenID isn't, on its own, enough to make it mainstream. The closest I've seen so far was having LiveJournal both accept and provide OpenID authentication (which I believe it has been doing for quite some time).
But I think that just accepting OpenID isn't enough. What we really need is more sites like this one that refuse to make their own authentication system, and require OpenID authentication. If the "next big thing" said you have to use your OpenID to log in (with a really simple wizard to set up a new ID with someone else), I believe that it will start the ball properly rolling.
Browsers should auto-fill OpenID login boxes so that you don't have to remember your ID.
Web frameworks should come with it as the default, unless you take lots of extra time to configure a simple username/password combination.
Sites that use OpenID need to put it front and center on the login page. I have seen many sites hide it behind a link under the standard login/registration page like this:
Username:
Password:
or use your OpenID
Choosing a provider needs to be much simpler.
At present there's no way to know how reliable, trustworthy or secure any of them are, or which will still be around in 6 months time.
It won't be mainstream, as it's too much effort and is too confusing for those used to email address and password.
For example:
To log in to stackoverflow with Opera I have to click login, select myOpenID from the list, type my username, hit enter, press Ctrl+Enter to autofill the password on the myOpenID site, then press the continue button.
To log in to any normal site with Opera I just press Ctrl+Enter to autofill the saved user/pass combo.
I'm looking into OpenID right now to integrate into a startup site so it can manage the login process for my site.
I think that to make this mainstream they need to make it super simple: copy and paste some code into your site and it loads a login form that gives you pretty much what Stackoverflow.com does.
I think you can style up the layout of the form to be more recognizable as well.
Personally I don't think it needs to be mainstream at all, it was an interesting idea, but it is no longer relevant.
When I create a normal login, I type in my username, master password and click on the SuperGenPass bookmarklet. That is it, when I had to sign up to stackoverflow I had to find an openId provider, sign up there (which took forever) login to my website and setup delegation, then add stackoverflow to my list of sites.
And yesterday I couldn't login because I had removed the file from my webhost and they had some security issue.
Conclusion: Don't use openid.
I'd use it if I could do it per-site and aggregate the identity later on my own time and terms. As it is, it's a giant pain in the ass to even find a decent OpenID provider; by decent I mean stackoverflow.com isn't one so I'm not going to bother.
Make it less open.
I do not want the same identity on multiple sites.
I do not want to have to create a flickr account before StackOverflow will let me post.
I do not want to have to create a new flickr account for each website that I want to register with.
