Need clarification on what the Public API in JavaFX is - javafx-2

I hear the term Public API a lot from JavaFX speakers.
What is a Public API supposed to mean?
Conversely, is there a Private API?

The most important things you should understand about private and public APIs:
First: technically, you can use both of them.
Second: when developers write a program, they decide what is meant for users and what is their own implementation. As time goes by, programs change. When developers change something in the private API, they do whatever they want, passing only an internal code review. But when a need appears to change the public API, they stop and think: is the change really needed, and is there no other solution? Only then do they change the public API, and they provide full information about the change (in release notes, for instance).
It is common practice to think ahead and keep a separation between public and private APIs, so that users have a stable API for the product's functionality.
Third, about JavaFX specifically: its private APIs are living. The product is under development, and these APIs change rapidly. The public APIs are mostly stable; the most common kind of public API change is the addition of new publicly available methods to a class (new functionality).
If you want or need to use some functionality from a private API, that is your risk. It can be changed or even removed. The developers are under no obligation to keep private functionality in the product, or to keep its API stable.
Finally, you can file an RFE in javafx-jira to request that such functionality be made publicly available. If it is a really good idea (as decided by the developers, perhaps after public discussion, and with attention to voting results), it could be done.

Public API is not a JavaFX term.
It is a term that applies to Java programs generally.
The Public API of a program is all of the following found in all its public classes:
fields of classes that are not private and not package-private
interfaces that are not package-private
methods that are not private and not package-private
member classes that are not private and not package-private
A synonym for Public API is "exported API". These are the features of your application that become accessible to other programmers who want to use your application in their own development.
The "private API" is simply those other features of your program that are not covered by the above. These private memebers of your classes are your "implementation details" and you are free to change or edit them as much as you wish as long as such changes do not impact the public API.
The ability to distinguish the implementation details from the exported API is an important concept in object-oriented programming in general and of Java programming in particular. This ability is often referred to as "encapsulation".
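For example (an illustrative sketch; the class and its members are invented), only the public method below belongs to the exported API, while the private field and the package-private helper are implementation details:

// Hypothetical example: only greet(String) is part of the public (exported) API.
public class Greeter {

    // Private field: an implementation detail, free to change at any time.
    private String template = "Hello, %s!";

    // Public method: part of the exported API; changing its signature would
    // break every caller, so it must stay backwards-compatible.
    public String greet(String name) {
        return format(name);
    }

    // Package-private helper: visible inside the package only, not exported.
    String format(String name) {
        return String.format(template, name);
    }
}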

All classes in the javafx namespace are the public API, and code using them won't get broken in future releases of JavaFX; classes found in com.sun.javafx, in contrast, are private and can change from release to release.
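For instance (a minimal illustration, assuming the package layout of JavaFX 2/8, where the skin classes still lived in the private namespace):

import javafx.scene.control.Button;                   // public API: safe to rely on
import com.sun.javafx.scene.control.skin.ButtonSkin;  // private API: may change or disappear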

Related

What is a Bouncy Castle provider used for in terms of digital PDF?

I'm reading here
and also http://www.bouncycastle.org/wiki/display/JA1/Provider+Installation
and also iText's white paper on digital signatures.
Here's a snippet of iText's sample code:
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.Security;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

BouncyCastleProvider provider = new BouncyCastleProvider();
Security.addProvider(provider);
KeyStore ks = KeyStore.getInstance("pkcs12", provider.getName());
ks.load(new FileInputStream(path), pass);
Question: what is a security provider, and what is it used for? The iText code uses the Bouncy Castle provider. Is it basically code used to hash the PDF, with the private key later used to encrypt the hash? And what is the role of the Security class above, where it says Security.addProvider(provider)?
Thanks.
A security provider provides algorithm services to the runtime. These are implementations of algorithms; for instance, Bouncy Castle adds a lot of algorithm implementations that extend CipherSpi (Spi stands for service provider interface). Oracle provides CipherSpi classes as well, but only for certain algorithms. These services are also used to implement, for example, the KeyStoreSpi behind "pkcs12", to make this more specific to your question.
Besides providing support for extra algorithms, providers can also be used to extend the functionality of the API, to support hardware tokens (smart cards, HSMs), specific key stores, faster implementations, etc. Bouncy Castle, however, is mainly used because it extends the number of algorithms available. Usually you don't specify the provider name when requesting an algorithm, letting the system choose for you. But sometimes a provider's implementation offers (or offered) some specific advantage over the one in the Oracle providers (e.g. "SunJCE"), and then it may make sense to choose the provider explicitly, as in your example code.
The Security class is a registry. It can be used by the system to look up and list the services present in each provider, using their names (as strings) and aliases. To get an idea of how this works, please try my answer here.
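As a small sketch of that lookup (the SHA-256 choice is just an example; the "BC" name assumes Bouncy Castle has been registered as in the question's snippet):

import java.security.MessageDigest;
import java.security.Provider;
import java.security.Security;

public class ProviderDemo {
    public static void main(String[] args) throws Exception {
        // List every registered provider with its self-description
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName() + ": " + p.getInfo());
        }
        // Let the runtime pick whichever provider offers SHA-256 first...
        MessageDigest auto = MessageDigest.getInstance("SHA-256");
        System.out.println("chosen: " + auto.getProvider().getName());
        // ...or name the provider explicitly ("BC" = Bouncy Castle)
        MessageDigest bc = MessageDigest.getInstance("SHA-256", "BC");
        System.out.println("explicit: " + bc.getProvider().getName());
    }
}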

SaaS - How to prove to users/clients that the server is always running the same code?

Let's suppose we have an open source project running on a server.
Is there a common way to prove to users that the server is running the same code as the one published?
There is never an implicit guarantee that a remote service is what its manifest describes; generally, the reputation of the service is what is relied upon.
What's more, SaaS itself is just a delivery model and doesn't necessarily define a set of protocols or contracts between a client and a service. It merely defines an approach to building and serving a public platform. It's a term more relevant for describing how a service is built and its intended market than for describing the nitty-gritty operational details.
If such a thing needed to be implemented as part of the contract between client and server, one could look at a hashing scheme built on HMACs. An identity mechanism could be implemented using salted access tokens, similar to OAuth, but using the files of the codebase to generate the checksum. This would guarantee that if the code executed properly once, the same code would still be running as long as the generated hash did not change (though, once again, there is no guarantee that the publicly exposed hash was properly generated).
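A minimal sketch of the checksum part (the directory walk and the SHA-256 choice are assumptions for illustration, not something the question prescribes):

import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.stream.Stream;

public class CodebaseChecksum {
    // Hash every file under the codebase root into one reproducible digest
    // that could be published and compared against later runs.
    public static String digest(Path root) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (Stream<Path> files = Files.walk(root)) {
            for (Path p : files.filter(Files::isRegularFile).sorted().toList()) {
                md.update(p.toString().getBytes());  // bind each file's content to its path
                md.update(Files.readAllBytes(p));
            }
        }
        return HexFormat.of().formatHex(md.digest());
    }
}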
Such a thing would be redundant, however, on top of the SSL security most services already tend to use.
The long and short of it is that if you have concerns about a service offered over a public API, there is probably a pretty good reason its reputation precedes it.

What does "public api" mean in Semantic Versioning?

I'm learning about how to assign and increment version numbers with the rule called "Semantic Versioning" from http://semver.org/.
Among all its rules, the first one said:
Software using Semantic Versioning MUST declare a public API. This API could be declared in the code itself or exist strictly in documentation. However it is done, it should be precise and comprehensive.
I am confused about "public API". What does it refer to?
Public API refers to the "point of access" that the external world (users, other programs and/or programmers, etc.) has to your software.
E.g., if you're developing a library, the public API is the set of all the method invocations that can be made against your library.
There is an understanding that, unless the major version changes, your API will stay backwards-compatible, i.e. all the calls that were valid on one version will remain valid on a later version.
You can read at point 9 of those rules:
Major version X (X.y.z | X > 0) MUST be incremented if any backwards incompatible changes are introduced to the public API.
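To make rule 9 concrete (an illustrative sketch; the interfaces are invented and shown side by side as successive versions of one hypothetical library):

import java.util.List;

// v1.2.0: the public API is the greet method.
interface GreeterV1 {
    String greet(String name);
}

// v1.3.0 (minor bump): a backwards-compatible addition; old callers still work.
interface GreeterV2 extends GreeterV1 {
    default String greetAll(List<String> names) {
        return String.join(" ", names.stream().map(this::greet).toList());
    }
}

// v2.0.0 (major bump): an incompatible change to an existing method signature.
interface GreeterV3 {
    String greet(String name, java.util.Locale locale);
}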
I discovered SemVer today and read up on it from several sources to ensure I had fully grasped it.
I am confused about "public API". What does it refer to?
I was also confused about this. I wanted to set about using SemVer immediately to version some of my scripts, but they didn't have a public API and it wasn't even clear to me how they could have one.
The best answer I found is one that explains:
SemVer is explicitly not for versioning all code. It's only for code that has a public API.
Using SemVer to version the wrong software is an all too common source of frustration. SemVer can't version software that doesn't declare a public API.
Software that declare a public API include libraries and command line applications. Software that don't declare a public API include many games and websites. Consider a blog; unlike a library, it has no public API. Other pieces of software cannot access it programmatically. As such, the concept of backward compatibility doesn't apply to a blog. As we'll explain, semver version numbers depend on backward compatibility. Because of this dependence, semver can't version software like blogs.
Source: What Software Can SemVer Version?
It requires a public API in order to effectively apply its versioning pattern.
For example:
Bug fixes not affecting the API increment the patch version,
backwards compatible API additions/changes increment the minor version, and
backwards incompatible API changes increment the major version.
What represents your API is subjective, as they even state in the SemVer doc:
This may consist of documentation or be enforced by the code itself.
Having read the spec a few times,
Software using Semantic Versioning MUST declare a public API. This API could be declared in the code itself or exist strictly in documentation. However it is done, it should be precise and comprehensive.
I wonder whether all it means is that the consumers of your software must be able to establish the precise "semantic" version they are using.
For example, I could produce a simple script where the semantic version is in the name of the script:
DoStuff_1.0.0.ps1
It's public and precise. Not just in my head :)
Semantic versioning is intended to remove the arbitrariness that can be seen when someone selects a versioning scheme for their project. To do that, it needs rules, and a public API is a rule that SemVer chose to use. If you are building a personal project, you don't need to follow SemVer, or follow it strictly. You can, for example, choose to loosely interpret it as:
MAJOR: Big new feature or major refactor
MINOR: New feature which does not impact the rest of the code much
PATCH: Small bug fix
But the vagueness of this loose interpretation opens you up to arbitrariness again. That might not matter to you, or to the people you foresee using your software.
The larger your project is, the more the details of your versioning scheme matter. As someone who has worked for quite some time in third-level support for a large IT company (which licenses APIs to customers), I have seen the "is it a bug or is it a feature" debate many times. SemVer intends to make such distinctions easier.
A public API can, of course, be a REST API, or the public interface of a software library. The public/private distinction is important, because one should have the freedom to change the private code without it adversely affecting other people. (If someone accesses your private code through, say, reflection, and you make a change which breaks their code, that is their loss.)
But a public API can even be something like command line switches. Think of POSIX-compliant CLI tools. These tools are standalone applications, but they are used in shell scripts, so the input they accept and the output they produce matter. The GNU project may choose to reimplement a POSIX standard tool and include its own features, but in order for a plethora of shell scripts to continue working across different systems, it needs to maintain the behaviour of the existing switches for that application.
I have seen people having to build wrappers around applications because the output of the version command changed, and they had scripts relying on that output being in a certain form. Should the output of the version command be part of the public API, or was what those people did a hack? The answer is that it is up to you and to what guarantees you make to the people using your software. You might not be able to imagine all use cases. Declaring the intended use of your software creates a contract with your users, and that contract is what a public API is.
SemVer, thus, makes it easier for your users to know what they are getting when upgrading. Did only the patch level change? Then better install it quickly to get the latest fix. Did the major version change? Then better run a full, potentially expensive, regression test suite to see if the application still works after the upgrade.

MEF: Component authentication

I am building a Windows (Service) application that, in short, consists of a "bootstrapper" and an "engine" (an object loaded by the bootstrapper, which transfers control to it, and then performs the actual tasks of the application). The bootstrapper is a very basic startup routine that has few features that are likely to change. But the engine itself could be subject to upgrades after installation, and I am implementing a mechanism so that it can upgrade itself - by contacting a "master server" and checking its version number against a "most current" version. If there is a newer version of the engine available, it will download it into a designated folder and call a method in the bootstrapper to "restart".
So, whenever the bootstrapper starts up, it uses MEF to "scan" the appropriate directories for implementations of IEngine, compares their bootstrapper compatibility numbers and picks the newest compatible engine version. Then it transfers control to the engine (which then, in turn, performs the update check etc). If there are no eligible IEngines - or MEF fails during composition - it falls back on a default, built-in implementation of IEngine.
This application will be running on a remote server (or several), and the whole rationale behind this is to keep manual application maintenance to a minimum (as in not having to uninstall/download new version/reinstall etc).
So, the problem: Since the bootstrapper effectively transfers program execution to a method on the IEngine object, a malicious IEngine implementation (or impersonator) that somehow found its way to the application's scanned folders could basically wreak total havoc on the server if it got loaded and was found to be the most eligible engine version.
I am looking for a mechanism to verify that an IEngine implementation is "authentic", as in issued by a proper authority. I've been playing around with some home-brewed "solutions": for instance, having IEngine expose a Validate function that is passed a "challenge" and has to return a proper "response" (the bootstrapper produces a random string, encrypts it and passes it to the engine candidate; the candidate has to decrypt it, modify the string, hash it, encrypt the hash and return it to the bootstrapper; the bootstrapper performs the same string modification on its own random string, hashes that, and compares the result to the decrypted response from the candidate). But I'm sure there are features in .NET to perform this kind of verification? I just looked at Strong Naming, but it seems it's not the best fit for a system that will be dynamically loading yet-unthought-of DLLs.
Input will be greatly appreciated.
Assemblies can be digitally signed with a private key. The result is called a strong named assembly.
When a strong named assembly is loaded, .NET automatically checks whether its signature matches the embedded public key. So when a strong named assembly has been loaded, you have the guarantee that the author possesses the private key that corresponds to that public key.
You can get the public key by calling Assembly.GetName().GetPublicKey() and then compare it to the expected one, i.e. yours.
You can scan over the plugin assemblies, create an AssemblyCatalog for each one with the right public key (rejecting the others), finally aggregating them into an AggregateCatalog and building a CompositionContainer with it.
This is basically what Glenn Block also explained in this thread. (Best ignore the blog post linked there by Bnaya, his interpretation of StrongNameIdentityPermission is not correct.)
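(MEF and strong naming are .NET mechanisms. Purely as an analogy, and since the only code on this page happens to be Java: the corresponding check in the Java world, verifying that a plugin JAR is signed by a trusted certificate before loading it, could look like the sketch below. The class and method names are invented.)

import java.io.InputStream;
import java.security.cert.Certificate;
import java.util.Arrays;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class PluginVerifier {
    // Returns true only if every class/resource in the JAR is signed by the trusted certificate.
    public static boolean isSignedBy(String jarPath, Certificate trusted) throws Exception {
        try (JarFile jar = new JarFile(jarPath, true)) {          // true = verify signatures
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (entry.isDirectory() || entry.getName().startsWith("META-INF/")) continue;
                try (InputStream in = jar.getInputStream(entry)) {
                    in.readAllBytes();                            // reading the entry triggers verification
                }
                Certificate[] certs = entry.getCertificates();    // null if the entry is unsigned
                if (certs == null || !Arrays.asList(certs).contains(trusted)) {
                    return false;
                }
            }
        }
        return true;
    }
}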
edit with responses to the wall of comments:
To get that public key, I make the console application output the public key byte array to somewhere. I embed the byte array in my host application, and subsequently use that to compare against the public keys of plugin candidates. Would that be the way to do it?
Yes, but there is a simpler way to extract the public key. Look at the -Tp option of sn.exe.
Does this mechanism automatically prevent a malicious plugin assembly from exposing a correct, but "faked" public key? As in, is there some mechanism to disqualify any assembly that is signed, but has a mismatch between its exposed public key and its internal private key, from being loaded/run at all?
As far as I know, the check happens automatically. A strong named assembly cannot be loaded (even dynamically) if its signature is wrong. Otherwise the strong name would be useless. To test this, you can open your strong named assembly in a hex editor, change something (like a character in a const string embedded in the assembly) and verify that the assembly can no longer be loaded.
I guess what I was referring to was something akin to the type of hack/crack described here:
http://www.dotnetmonster.com/Uwe/Forum.aspx/dotnet-security/407/Signed-assemblies-easily-cracked
and here: Link
[...snip more comments...]
However, this can - apparently - be bypassed by simple tampering (as shown in the first link, and explained more here): grimes.demon.co.uk/workshops/fusionWSCrackOne.htm
The "attacks" you refer to fall in three categories:
removing the strong name altogether. This does not break the authentication, the assembly will no longer have a public key and so you will reject it.
disabling the strong name check, which requires full access to the machine. If this was done by an attacker, then it would mean that the attacker already owns your machine. Any security mechanism would be meaningless in such a context. What we are actually defending against is an attacker between the machine and the source of the assemblies.
a real exploit made possible by a bug in .NET 1.1 that has since been fixed
Conclusion: strong names are suitable to use for authentication (at least since .NET 2.0)
I've written a blog post with source code for a catalog which only loads assemblies with keys that you specify: How to control who can write extensions for your MEF application

Is the public UDDI movement dead, or was it ever alive?

I am trying to find some public UDDI registries to interact with, for learning purposes. But it seems there are none available. I popped the following question on SO to see if someone knows about any public registry still hosted, but got no answers.
The IBM, Microsoft and SAP public registries were a test of the UDDI technology. I quote from here: "The primary goal of the UBR was to prove the interoperability and robustness of the UDDI specifications through a public implementation. This goal was met and far exceeded."
They continue to support the UDDI specifications in their products (so different companies can host their own UBRs for private use).
Now, I am changing my original question to this: is the public UDDI movement dead, or was it ever alive?
What do you think? If your answer is no, can you provide an example of an existing public UDDI UBR?
Public UDDI is indeed dead, but it managed to survive in private registries inside enterprises.
A UDDI registry's functional purpose is the representation of data and metadata about Web services. A registry, either for use on a public network or within an organization's internal infrastructure, offers a standards-based mechanism to classify, catalog, and manage Web services, so that they can be discovered and consumed by other applications.
http://uddi.org/pubs/uddi-tech-wp.pdf
This isn't bad as a definition and statement of purpose; unfortunately, it was applied at the scale of the entire web.
UDDI was supposed to be the "yellow pages" of web services. If you wanted to find a web service providing a certain functionality, you would look it up inside the UDDI.
The idea was to use a standard (universal) mechanism for online interaction between the components of SOA businesses. You would dynamically look up services, connect to them and do business automatically. And the choice between similar services was supposed to be made based on the metadata found in the UBR (all of it inside a very complex model, which discouraged adoption), with no way of checking whether a service actually did what you expected it to do.
But bringing every interaction to a common ground was impossible, because businesses are highly heterogeneous. And business still revolves around people, human activity and human decisions.
Business is conducted between partners that choose to work with each other only after thorough analysis and negotiation, before finally striking a deal and agreeing on all terms and conditions. Only then are their infrastructures connected. And at this point the UDDI definition does start to make sense, because within the enterprise UDDI allows you to:
relocate services without any of the clients failing;
supports load balancing;
improves efficiency by reducing manual interventions within the infrastructure;
manage redundancy (if one service fails the clients will search for another service providing the same functionality);
etc.
... but all of this within a confined set of predetermined services whose functionality is well established and agreed upon.
I received an answer from John Saunders to one of my comments on my original question, and I think he is right.
To summarize it:
The public UDDI movement is dead because the IBM, Microsoft and SAP public registries were the UDDI movement.
UDDI is indeed dead. Three things killed it:
Overambitious complexity
Ignoring security
The difficulty, still with us, of managing and collecting micropayments
If a UDDI broker dynamically chooses a service provider for me, I have no opportunity to do any due diligence on the security of the service. And how much trouble would the broker take to ensure security for me? Not a lot, I would suggest.
Web services are commonly used behind the firewall for SOA purposes, to integrate applications with business partners, and to call well-known APIs. UDDI is total overkill for these purposes. A large organisation should have a catalogue of its web services, but that could be as simple as a wiki page. A developer looking for a potentially useful web service needs a one paragraph description of what it does, a contact person, and some WSDL and technical documentation. UDDI is not necessary for any of that.
Not dead.
Apache jUDDI has a public snapshot available online
http://demo.juddi.apache.org/
