Is the public UDDI movement dead, or was it ever alive? - discovery

I am trying to find some public UDDI registries to interact with, for learning purposes. But it seems there are none available. I popped the following question on SO to see if someone knows about any public registry still hosted, but got no answers.
The IBM, Microsoft and SAP public registries were a test of the UDDI technology. I quote from here: "The primary goal of the UBR was to prove the interoperability and robustness of the UDDI specifications through a public implementation. This goal was met and far exceeded."
They now continue to support the UDDI specifications in their products (so, different companies can host their UBRs for private use).
Now, I am changing my original question to this: Is the public UDDI movement dead, or was it ever alive?
What do you think? If your answer is no, can you provide an example of an existing public UDDI UBR?

Public UDDI is indeed dead, but it managed to survive in private registries inside enterprises.
A UDDI registry's functional purpose is the representation of data and metadata about Web services. A registry, either for use on a public network or within an organization's internal infrastructure, offers a standards-based mechanism to classify, catalog, and manage Web services, so that they can be discovered and consumed by other applications.
http://uddi.org/pubs/uddi-tech-wp.pdf
This isn't bad as a definition and statement of purpose; unfortunately, it was applied at the level of the whole web.
UDDI was supposed to be the "yellow pages" of web services. If you wanted to find a web service providing a certain functionality, you would look it up inside the UDDI.
The idea was to use a standard (universal) mechanism for online interaction between SOA business components. You would then dynamically look up services, connect to them, and do business automatically. The choice between similar services was supposed to be made based on the metadata found in the UBR (all of it inside a very complex model, which discouraged adoption), with no way of checking whether a service actually did what you expected it to do.
But bringing every interaction to a common ground was impossible, because businesses are highly heterogeneous. And business still revolves around people, human activity, and human decisions.
Business is conducted between partners who choose to do business with each other only after thorough analysis and negotiation, before finally striking a deal and agreeing on all terms and conditions. Only then are their infrastructures connected. And at this point the UDDI definition does start to make sense, because within the enterprise UDDI allows you to:
relocate services without any of the clients failing;
support load balancing;
improve efficiency by reducing manual interventions within the infrastructure;
manage redundancy (if one service fails, clients look up another service providing the same functionality);
etc.
... but all of this within a confined set of predetermined services whose functionality is well established and agreed upon.
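To make that lookup step concrete, here is a rough sketch of what a client-side inquiry against such a private registry could look like. It assumes a UDDI v2 inquiry endpoint; the endpoint URL and the service name below are hypothetical, not from any real registry.

```python
# Rough sketch of a UDDI v2 "find_service" inquiry call against a private
# registry. INQUIRY_URL and the service name are hypothetical placeholders.
import requests
import xml.etree.ElementTree as ET

INQUIRY_URL = "http://registry.example.com/uddi/inquiry"  # hypothetical endpoint

SOAP_ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <find_service generic="2.0" xmlns="urn:uddi-org:api_v2">
      <name>OrderProcessing</name>
    </find_service>
  </soapenv:Body>
</soapenv:Envelope>"""

def find_services():
    """Ask the registry for services whose name matches 'OrderProcessing'."""
    response = requests.post(
        INQUIRY_URL,
        data=SOAP_ENVELOPE.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": '""'},
        timeout=10,
    )
    response.raise_for_status()

    # Pull the serviceKey/name pairs out of the serviceList response.
    ns = {"uddi": "urn:uddi-org:api_v2"}
    root = ET.fromstring(response.content)
    for info in root.iter("{urn:uddi-org:api_v2}serviceInfo"):
        name = info.find("uddi:name", ns)
        print(info.get("serviceKey"), name.text if name is not None else "")

if __name__ == "__main__":
    find_services()
```

Inside an enterprise, this kind of lookup is what lets clients rebind to a relocated or redundant service instead of hard-coding its address.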

John Saunders replied to one of my comments on my original question, and I think he is right.
To summarize it:
The public UDDI movement is dead because the IBM, Microsoft and SAP public registries were the UDDI movement.

UDDI is indeed dead. Three things killed it:
Overambitious complexity
Ignoring security
The difficulty, still with us, of managing and collecting micropayments
If a UDDI broker dynamically chooses a service provider for me, I have no opportunity to do any due diligence on the security of the service. And how much trouble would the broker take to ensure security for me? Not a lot, I would suggest.
Web services are commonly used behind the firewall for SOA purposes, to integrate applications with business partners, and to call well-known APIs. UDDI is total overkill for these purposes. A large organisation should have a catalogue of its web services, but that could be as simple as a wiki page. A developer looking for a potentially useful web service needs a one paragraph description of what it does, a contact person, and some WSDL and technical documentation. UDDI is not necessary for any of that.

Not dead.
Apache jUDDI has a public snapshot available online
http://demo.juddi.apache.org/

Related

How to identify domains, subdomains and bounded contexts in an online retailer integration scenario?

The problem I'm facing is the design of an integration platform.
The company has different tools used for selling online financial services and wants to unify the selling process by creating a common integration platform.
Existing tools range from simply designing a tailor-made offer to managing all the phases, from listing to selling and support. The integration platform should orchestrate all the tools.
So, do I approach this problem from a DDD point of view?
Domain: selling online services
Subdomains: service catalog, requesting offers, sending offers, buying a service, customer support.
Bounded contexts? Maybe integration with other company systems, like identities and invoices?
My trouble with this is that some existing applications encompass several subdomains, while others don't. Also, some applications working in the same subdomain use completely different languages, for example service vs. product vs. project...
How does an integration platform fit into this picture, and how would you approach it from a DDD point of view? (Or is that completely the wrong approach, and should I leave DDD inside each tool and treat each one as a bounded context?)
I recommend extracting the common bits of meaning (ignoring their names) from the various applications into common domains/bounded contexts. Each bounded context has anti-corruption layers that essentially adapt the language used in one or more existing applications to the one used in the common domain (and vice versa). Then you can cut over the existing applications piece-by-piece to use the respective ACLs to take advantage of the common domain implementation.
Eventually, you might even be able to dispense with the ACLs, as the language becomes more ubiquitous, but it's also perfectly okay to keep them around forever: the ACLs introduce some indirection (and possibly complexity, e.g. if they're deployed as their own microservices) but that's the price you pay for limiting coupling to the ACL.
(It's not clear from the question how experienced you are with DDD).
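As a rough illustration of that advice (every name here is invented for the example, not taken from the question): an anti-corruption layer that adapts one legacy application's "product" language to a common-domain "offering" concept might be little more than a translating adapter, sketched in Python.

```python
# Hedged sketch of an anti-corruption layer: the common domain speaks in
# terms of "Offering", while a legacy sales tool speaks in terms of
# "Product". Offering, LegacyProduct and LegacySalesAcl are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Offering:
    """Concept in the common (integration) domain."""
    offering_id: str
    display_name: str
    monthly_price: float

@dataclass(frozen=True)
class LegacyProduct:
    """Shape of the data coming out of one existing application."""
    sku: str
    label: str
    price_cents: int

class LegacySalesAcl:
    """Anti-corruption layer: translates the legacy language into the
    common domain language, so the legacy model never leaks inward."""

    def __init__(self, legacy_client):
        self._client = legacy_client  # wraps the existing application's API

    def offerings_for_customer(self, customer_id: str) -> list[Offering]:
        products: list[LegacyProduct] = self._client.products_for(customer_id)
        return [
            Offering(
                offering_id=p.sku,
                display_name=p.label,
                monthly_price=p.price_cents / 100.0,
            )
            for p in products
        ]
```

The integration platform then only ever sees Offering; whether the upstream tool calls it a product, a service or a project is an implementation detail of the ACL.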

SaaS - How to prove to users/clients that the server is always running the same code?

Let's suppose we have an open source project running in a server.
Is there a common way to prove to users that we're using the same code as the one published?
There is never an implicit guarantee that the remote service is what's described in its manifest, though generally the reputation of the service is what's directly considered.
What's more, SaaS itself is just a delivery model, and doesn't necessarily define a set of protocols or contracts between a client and a service. It merely defines an approach to building and serving a public platform. It's a term more relevant for describing the building process of a service and its intended market than for describing the nitty-gritty operational details.
If such a thing needed to be implemented as part of the contract between the client and the server, one could look at implementing a native hashing solution using HMACs. An identity mechanism could be implemented using salted access tokens, similar to OAuth, but using the files of the codebase to generate the checksum. This would guarantee that if the code executed properly once, the same code would keep running for as long as the generated hash did not change (though there is, once again, no guarantee that the publicly exposed hash was properly generated).
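As a sketch of that idea (the paths and the key below are placeholders, and this is only one possible construction): hash every file of the deployed codebase into a single checksum, and optionally wrap it in an HMAC with a key shared with the client.

```python
# Rough sketch of the "hash the codebase" idea: walk the deployed source
# tree, hash every file, and fold the per-file digests into one checksum
# that can be published. An optional HMAC over the checksum (with a key
# shared with the client) sketches the "salted token" variant mentioned
# above. The path and key are made-up placeholders.
import hashlib
import hmac
import os

def codebase_checksum(root: str) -> str:
    """Deterministic SHA-256 over all files under `root`."""
    outer = hashlib.sha256()
    for dirpath, _dirnames, filenames in sorted(os.walk(root)):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            outer.update(path.encode("utf-8"))  # include the path itself
            with open(path, "rb") as fh:
                outer.update(hashlib.sha256(fh.read()).digest())
    return outer.hexdigest()

def signed_checksum(root: str, shared_key: bytes) -> str:
    """HMAC the checksum so only holders of the key can mint a valid value."""
    return hmac.new(shared_key, codebase_checksum(root).encode(),
                    hashlib.sha256).hexdigest()

if __name__ == "__main__":
    print(codebase_checksum("/srv/app"))                      # placeholder path
    print(signed_checksum("/srv/app", b"key-shared-with-client"))
```

Of course, the published value only helps if clients trust how it was generated, which is exactly the caveat above.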
Such a thing would seem redundant, however, on top of the SSL security most services already tend to use.
The long and short of it is that if you have concerns about the service being offered over a public API, then there is probably a pretty good reason its reputation precedes it.

Considerations regarding a p2p social network

While there are many social networks in the wild, most rely on data stored on a central site owned by a third party.
I'd like to build a solution where the data remains local on members' systems. Think of the project as an address book which automagically updates a contact's data as soon as that contact changes their coordinates. This base idea might get extended later on...
Updates will be transferred using public/private key cryptography via a central host. The sole role of the host is to be a store-and-forward intermediary. Private keys remain private on each member's system.
If two clients are both online and a P2P connection can be established, they can transfer data telegrams without the central host.
Thus, sender and receiver will be the only parties able to create authentic messages.
Questions:
Are there certain protocols which I should adopt?
Are there any security concerns I should keep in mind?
Are there certain services which should be integrated or used somehow?
More technically:
Use, e.g., Amazon- or Google-provided services?
Or better to use a raw web server? If yes: why?
Which algorithm and key length should be used?
UPDATE-1
I googled my own question title and found this academic project, developed in 2008/09: http://www.lifesocial.org/.
The solution you are describing sounds remarkably like email, with encrypted messages as the payload, and an application rather than a human being creating the messages.
It doesn't really sound like "p2p" - in most P2P protocols, the only requirement for central servers is discovery - you're using store & forward.
As a quick proof of concept, I'd set up an email server and build an application that sends emails to addresses registered on that server, encrypted using PGP - the tooling and libraries are available, so you should be able to get that up and running in days rather than weeks. In my experience, building a throw-away PoC for this kind of question is a great way of sifting out the nugget of the idea.
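For instance, a throw-away PoC along those lines could be little more than the following. It assumes the third-party python-gnupg package, a local GnuPG keyring that already contains the recipient's public key, and an SMTP server on localhost; all addresses are placeholders.

```python
# Quick proof-of-concept in the spirit of the suggestion above: encrypt an
# update with PGP and hand it to a mail server for store-and-forward
# delivery. Assumes `python-gnupg`, an existing keyring with the
# recipient's public key, and an SMTP server on localhost.
import smtplib
from email.message import EmailMessage

import gnupg  # pip install python-gnupg

def send_update(sender: str, recipient: str, payload: str) -> None:
    gpg = gnupg.GPG()  # uses the default keyring
    encrypted = gpg.encrypt(payload, recipient)  # recipient = key uid/email
    if not encrypted.ok:
        raise RuntimeError(f"encryption failed: {encrypted.status}")

    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "contact-update"
    msg.set_content(str(encrypted))  # ASCII-armoured ciphertext as the body

    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    send_update("alice@example.org", "bob@example.org", '{"phone": "+1-555-0100"}')
```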
The second issue is that the nature of a social network is that it's a network. Your design may require you to store more than the data of the two direct contacts - you may also have to store their friends, or at least the public interactions those friends have had.
This may not be part of your plan, but if it is, you need to think it through early on - you may end up having to transmit the entire social graph to each participant for local storage, which creates a scalability problem....
The paper about Safebook might be interesting for you.
You could also take a look at other distributed OSNs and see what they are doing.
None of the federated networks mentioned on http://en.wikipedia.org/wiki/Distributed_social_network is actually distributed. What Stefan intends to do is indeed new and was only explored by some proprietary folks.
I've been thinking about the same concept for the last two years. I've finally decided to give it a try using Python.
I've spent the better part of last night and this morning writing a sockets communication script & server. I also plan to remove the central server from the equation, as it's just plain cumbersome and there's no point to it when all the members could keep copies of their friends' keys.
Each profile could be accessed via a hashed string of someone's public key. My social network relies on nodes and pods. Pods are computers which have their ports open to the network. They help with relaying traffic as most firewalls block incoming socket requests. Nodes store information and share it with other nodes. Each node will get a directory of active pods which may be used to relay their traffic.
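The "profile addressed by a hash of the public key" part is straightforward; a toy illustration (with placeholder key material, not a real key) could look like this:

```python
# Tiny illustration of addressing a profile by a hash of the public key:
# the profile ID is the SHA-256 digest of the (armoured) public key, so
# anyone holding the key can derive the same address. The key here is a
# placeholder string, not real key material.
import hashlib

def profile_id(public_key_bytes: bytes) -> str:
    return hashlib.sha256(public_key_bytes).hexdigest()

if __name__ == "__main__":
    fake_key = b"-----BEGIN PUBLIC KEY-----\n...placeholder...\n-----END PUBLIC KEY-----\n"
    print(profile_id(fake_key))  # e.g. used as the lookup key at a pod
```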
The PeerSoN project looks like something you might be interested in: http://www.peerson.net/index.shtml
They have done a lot of research and the papers are available on their site.
Some thoughts about it:
protocols to use: you could look closely at existing P2P programs and their design
security concerns: privacy. Take great care not to open doors: a whole system can get compromised because you opened some door.
services: you could integrate with the regular social networks through their APIs
People will have to install a program on their computers and remember to open it every time, like any P2P client. Leaving everything on a web server demands a smaller footprint and less user action.
Somehow you'll need a centralized server to manage the searches. You can't just broadcast the internet to find friends. Or you'll have to rely upon email requests to add someone, and to do that you'll need to know the email address in advance.
The fewer friends/contacts use your program, the fewer people will want to use it, since it won't have contact information available.
I see that your server will be a store and forward, so the update problem is solved.

How do I secure a connection from a web role to SQL Azure?

We're trying to implement the Gatekeeper design pattern as recommended in Microsoft Security Best Practices for Azure, but I'm having some trouble determining how to do that.
To give some background on the project, we're taking an already developed website using the traditional layered approach (presentation, business, data, etc.) and converting it over to use Azure. The client would like some added security built around this process since it will now be in the cloud.
The initial suggestion to handle this was to use Queues and have worker roles process requests entered into the queue. Some of the concerns we've come across are how to properly serialize the objects and indicate which methods we need to run on them, as well as the latency inherent in such an approach.
We've also looked at setting up some WCF services in the Worker Role, but I'm having a little trouble wrapping my head around how exactly to handle this. (In addition to this being my first Azure project, this would also be my first attempt at WCF.) We'd run into the same object serialization issue here.
Another thought was to set up some web services in another web role, but that seems to open the same security issue since we won't be able to perform IP-based security on the request.
I've searched and searched but haven't really found any samples that do what we're trying to do (or I didn't recognize them as doing so). Can anyone provide some guidance with code samples? Thanks.
Please do not take this the wrong way, but it sounds like you are in danger of over-engineering a solution based on the "requirement" that 'the client would like some added security'. The Gatekeeper pattern described on page 13 of the Security Best Practices For Developing Windows Azure Applications document is a very big gun which you should only fire at large targets, i.e., scenarios where you actually need hardened applications storing highly sensitive data. Building something like this will potentially cost a lot of time and performance, so make sure you weigh the pros and cons thoroughly.
Have you considered leveraging SQL Azure firewall as an additional (and possibly acceptable) security measure? You can specify access on an IP address level and even configure it programmatically through stored procedures. You can block all external access to your database, making your Azure application (web/worker roles) the only "client" that is allowed to gain access.
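As a hedged sketch of that programmatic option (the server name, credentials, rule name and IP below are placeholders): the server-level firewall rules live in the master database and can be managed with the sp_set_firewall_rule stored procedure, which you could call from any SQL client, for example via pyodbc.

```python
# Sketch of configuring the SQL Azure firewall programmatically by calling
# the server-level stored procedure from Python via pyodbc. Server name,
# credentials, rule name and IP address are all placeholders.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;"   # placeholder server
    "DATABASE=master;"                        # server-level rules live in master
    "UID=admin_user;PWD=admin_password"       # placeholder credentials
)

def allow_role_ip(rule_name: str, ip: str) -> None:
    """Add (or update) a firewall rule so this IP range can connect."""
    with pyodbc.connect(CONN_STR, autocommit=True) as conn:
        cur = conn.cursor()
        cur.execute(
            "EXEC sp_set_firewall_rule @name = ?, "
            "@start_ip_address = ?, @end_ip_address = ?",
            rule_name, ip, ip,
        )

if __name__ == "__main__":
    allow_role_ip("web-role-instance-0", "203.0.113.17")
```

One thing to verify for your setup is how stable your web/worker roles' outgoing IP addresses actually are, since these rules are purely IP-based.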
To answer one of your questions specifically, you can secure access to a WCF service using X.509 certificates and implement message security; if you also need an SSL connection to protect data in transit you would need to use both message and transport security. It's not the simplest thing on earth, but it's possible. You can make it so only the servers that have the correct certificate can make the WCF request. Take a look at this thread for more details and a few more pointers: http://social.msdn.microsoft.com/Forums/en-US/windowsazuresecurity/thread/1f77046b-82a1-48c4-bb0d-23993027932a
Also, WCF makes it easy to exchange objects as long as you mark them Serializable. So making WCF calls would dramatically simplify how you exchange objects back and forth with your client(s).

Are services like AWS secure enough for an organization that is highly responsible for its clients' privacy?

Okay, so we have to store our clients' private medical records online, and the web site will also get a lot of requests, so we have to use some scaling solutions.
We can have our own share of a datacenter and run something like Zend Server Cluster Manager on it, but services like Amazon EC2 look a lot easier to manage, and they are incredibly cheaper too. We just don't know if they are secure enough!
Are they?
Any better solutions?
More info: I know that there is a reference server and it's highly secured and without it, even the decrypted data on the cloud server would be useless. It would be a bunch of meaningless numbers that aren't even linked to each other.
Making the question more clear: Are there any secure storage and process service providers that guarantee there won't be leaks from their side?
First off, you should contact AWS and explain what you're trying to build and the kind of data you deal with. As far as I remember, they have regulations in place to accommodate most if not all the privacy concerns.
E.g., in Germany such a thing is called an "Auftragsdatenvereinbarung" (a data processing agreement). I have no idea how this relates and translates to other countries. AWS offers this.
But no matter whether you go with AWS or another cloud computing service, the issue stays the same. And therefore, what is possible is probably best answered by a lawyer; based on that hopefully well-educated (and expensive) recommendation, I'd go cloud shopping, or maybe not. If you're in the EU, there are a ton of regulations, especially in regard to medical records -- some countries add more on top.
From what I remember, it's basically required to have end-to-end encryption when you deal with these things.
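As a minimal sketch of that point (client-side encryption before anything reaches the provider), here is one possible shape, using the third-party cryptography and boto3 packages; the bucket name, record IDs and key handling are deliberately simplified placeholders.

```python
# Illustration of encrypting records on your own infrastructure before they
# ever reach the cloud provider, so the provider only stores ciphertext.
# Bucket name, record IDs and key management are simplified placeholders.
import boto3
from cryptography.fernet import Fernet

def store_record(bucket: str, record_id: str, plaintext: bytes, key: bytes) -> None:
    ciphertext = Fernet(key).encrypt(plaintext)      # encrypted client-side
    boto3.client("s3").put_object(Bucket=bucket, Key=record_id, Body=ciphertext)

def load_record(bucket: str, record_id: str, key: bytes) -> bytes:
    obj = boto3.client("s3").get_object(Bucket=bucket, Key=record_id)
    return Fernet(key).decrypt(obj["Body"].read())   # decrypted client-side

if __name__ == "__main__":
    key = Fernet.generate_key()  # in reality, kept off-cloud (e.g. on the reference server)
    store_record("example-medical-records", "patient-42", b'{"bp": "120/80"}', key)
    print(load_record("example-medical-records", "patient-42", key))
```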
Last but not least, security also depends on the setup, the application, etc.
For complete and full security, I'd recommend a system that is not connected to the Internet. All others can fail.
You should never outsource highly sensitive data. Your company, and only your company, should have access to it - in both software and hardware terms. Even if your hoster is generally trusted, someone there might just steal hardware.
Depending on the size of your company, you should have your own custom servers - preferably even inaccessible to the technicians in your datacenter (supposing you don't own the datacenter ;).
So the more important the data is, the fewer outside people should have access to it by any means. In the best case, you can name all the people that have access to it in any way.
(Update: This might not apply to anonymous data, but as you're speaking of customers I don't think that applies here?)
(On a third thought: there are probably laws to take into consideration regarding how you have to handle that kind of information ;)
