What is the difference between RPC frameworks like Thrift or gSOAP and the built-in MS RPC, if we talk about security configuration? MSDN describes some aspects at http://msdn.microsoft.com/en-us/library/windows/desktop/aa379441(v=vs.85).aspx, so I can presume that there is security support in MS RPC. Does this mean that if I would like to use a framework other than Microsoft's, I need to take care of security by myself?
This is a very broad question. I'm not quite sure what you really expect, but I'll try to do my best to answer your question.
First, of course you have to take care of the security of whatever you are writing, be it server or client code. Security with regard to RPC services is a wide field, and any sophisticated security feature made available to you by a framework is still just a tool, and still only one part of the overall security concept of your service. To put it another way: using SSL will not protect your server from SQL injection.
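For instance, even with the transport fully encrypted, input handling remains your job. A minimal illustration in Java, assuming a JDBC DataSource; the table and column names are made up for the example:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class UserDao {
    private final DataSource dataSource;

    public UserDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Even if the RPC transport is wrapped in SSL, the service must still
    // treat incoming parameters as untrusted. A parameterized query keeps
    // user input out of the SQL text entirely.
    public boolean userExists(String userName) throws SQLException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT 1 FROM users WHERE name = ?")) {
            ps.setString(1, userName); // bound as data, never parsed as SQL
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```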
Next, Thrift, SOAP and MS-RPC each have different design goals. Thrift is designed with performance and portability in mind. It focuses on basic RPC, providing efficiency and portability to any application, for any purpose, in the simplest possible way that works. Of course this approach implies that there are not many higher-level features, because those are considered out of the scope of Thrift and left to the user. However, for some of the languages TLS (SSL) transports are available.
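For illustration, with the Java bindings a TLS client transport is obtained through TSSLTransportFactory, along the lines of the standard Thrift tutorial; the host, port and truststore values below are placeholders:

```java
import org.apache.thrift.transport.TSSLTransportFactory;
import org.apache.thrift.transport.TSSLTransportFactory.TSSLTransportParameters;
import org.apache.thrift.transport.TTransport;

public class SecureThriftClient {
    public static void main(String[] args) throws Exception {
        // Trust material used to verify the server's certificate.
        TSSLTransportParameters params = new TSSLTransportParameters();
        params.setTrustStore("/path/to/truststore.jks", "truststore-password");

        // Returns an already-open, TLS-wrapped transport.
        TTransport transport = TSSLTransportFactory.getClientSocket(
                "server.example.com", 9091, 10000, params);

        // ... create a TProtocol and your generated service client here ...

        transport.close();
    }
}
```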
In contrast, SOAP is a much richer protocol, based on XML as a machine-readable, standardized and extensible format, which can be extended to support higher-level features like WS-Security, WS-ReliableMessaging and so on. The downside is that I have seen many frameworks and development tools which - despite the fact that SOAP was standardized years ago - are still not able to deal with SOAP correctly in even the simplest fashion, let alone support WS-Security. Yet, in spite of this, and in spite of the fact that SOAP messages tend to produce a lot of traffic and give poor performance, SOAP is still widely used in the industry.
MS-RPC, as one of the foundations of DCOM, is bound very much to the Windows environment and to Windows development tools. If you can live with that limitation and want to use DCOM, then DCOM offers a very high-level abstraction with good and proven support in today's IDEs.
Related
I am attending a training course where they presented the following graphic as the Anatomy of a Typical Java Web Application. Is it too sweeping of a statement, or is it largely accurate?
Here it goes:
(Disclaimer: My experience is drawn mostly from non-Java platforms, though I have some limited experience with Java - but mostly I'm not a fan)
It's accurate - but only for applications using that architecture - which makes this statement somewhat of a tautology.
I'll break it down:
Service Consumer Perspective
A "service consumer" is also more commonly known as a client.
"Service interface files" are not needed to build a client.
I assume by "interface files" it's referring to things like a SOAP WSDL document or Swagger file for REST services. These files are not generally used by clients at runtime but are used to automatically create client class-libraries at design-time - but you can always build a client without any code-generation or reuse of Java interface types.
If it is referring to the reuse of the server/application's Java interface types, then the diagram is only accurate for client+server applications that are all-Java and designed/created at the same time (which is an old practice from the days of SOAP). These days almost everything made in the past 5-10 years is RESTful and returns JSON data, but Java interface types are insufficient to model unstructured data like JSON, given that you can't model discriminated unions without concrete classes (or abusing exception handlers, egads) - and discriminated unions are an important tool for modeling JSON in OOP languages.
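To illustrate the hand-rolled dispatch this forces on you, here is a minimal sketch using Jackson's tree model; the "type" discriminator and the payload fields are invented for the example:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class UnionDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // A union of two response shapes, discriminated by a "type" field.
        JsonNode node = mapper.readTree(
                "{\"type\":\"error\",\"message\":\"not found\"}");

        // No Java interface type can express "either Success or Error" here;
        // the client has to branch on the discriminator by hand.
        switch (node.get("type").asText()) {
            case "success":
                System.out.println("value = " + node.get("value").asInt());
                break;
            case "error":
                System.out.println("error = " + node.get("message").asText());
                break;
            default:
                throw new IllegalStateException("unknown variant");
        }
    }
}
```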
Service Provider Perspective
I disagree with the term "front controller" being used to refer to what is commonly known as a back-end web-service controller, Servlet, or Spring Controller, as "front-end" generally refers to the user-facing UI/UX, such as the rendered HTML+JS, an SPA front-end, or a rich-client/fat-client (granted, this would be the "service consumer").
You don't need "service metadata" to have a web-application or a web-service - though if you're shipping a web-service designed to be consumed by disparate or non-first-party clients then its a good idea to make a WSDL, Swagger, or whatever metadata or service-description system your platform uses so that your consumers can generate their own strongly-typed clients.
"Service implementer perspective"
So this is my biggest objection: the diagram assumes that the web-service will be 3-tier and that the controller/Servlet code is only a thin layer in front of "application" types located elsewhere in the system. This is common in large-scale and complicated applications, where you'll have host-agnostic application code designed to run in, for example, an integration-test or unit-test host - or as a desktop application. In my experience, however, most projects lump all application logic inside the host-specific (i.e. Spring, Servlet, etc.) code, because it simplifies things greatly and because those hosts often support testability anyway. The idea of reusing application code libraries as-is for desktop or mobile applications just doesn't work out well in reality, given the massive differences between the disconnected, stateless model of web-service requests and the needs of stateful, in-process client applications.
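For contrast, the 3-tier shape the diagram assumes looks roughly like this in Spring, with a thin controller delegating to a host-agnostic application type (all names here are invented for illustration):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Host-agnostic application logic: no Servlet or Spring types,
// so it could also run inside a unit test or a desktop host.
interface OrderService {
    String describeOrder(long orderId);
}

// The host-specific layer stays a thin adapter over the service.
@RestController
class OrderController {
    private final OrderService orderService;

    OrderController(OrderService orderService) {
        this.orderService = orderService;
    }

    @GetMapping("/orders/{id}")
    String getOrder(@PathVariable("id") long id) {
        return orderService.describeOrder(id);
    }
}
```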
In summary: it's not wrong, but I don't believe it accurately describes the majority of (Java) web applications I've personally dealt with - though this is my subjective opinion. I know that Java web-application and web-service frameworks like Spring and Java EE are designed for, and encourage, 3-tier architecture, but I wouldn't describe that as an example of the pit-of-success. I feel this is partly due to shortcomings in the Java language design (and the fact that these frameworks were designed over 20 years ago, before things like generics were added to the language).
I need to write a WebSocket server, and I am learning Node.js by reading some books I purchased. This server is for a very fast game, so I need to stream small messages to groups of clients as quickly as possible.
What is the difference between:
Autobahn | JS : http://autobahn.ws/js/
and
Einaros : https://github.com/einaros/ws
?
I have heard that Autobahn is very powerful and capable of dealing with 200k clients without a load balancer, so I was wondering if someone with more experience could advise me on whether there is any advantage in opting for one library or the other.
The functional difference is: Einaros is a WebSocket library, whereas Autobahn provides WebSocket implementations (e.g. AutobahnPython), plus WAMP on top of WebSocket.
WAMP provides higher-level communication for apps (RPC + PubSub - please see the WAMP website). And AutobahnJS is a WAMP implementation for browsers (and Node.js) on top of WebSocket.
Now, say you don't care about WAMP, and hence only need a raw WebSocket server. Then you can compare AutobahnPython with Einaros primarily based on non-functional characteristics, like protocol compliance, security and performance.
Autobahn has best-in-class protocol compliance. I dare say that, since the Autobahn project also provides the quasi industry-standard WebSocket test suite, used by most projects - including Einaros. Autobahn has 100% strict passes on all tests. Einaros probably does too - I don't know.
Performance: yes, a single AutobahnPython based WebSocket server (4GB RAM, 2 cores, PyPy, FreeBSD in a VirtualBox VM) can handle 200k connected clients. To give you some more data points: here is a post with performance benchmarks on the RaspberryPi.
In particular, this post highlights the most important (IMO) metric: 95%/99% quantile messaging latency. You shouldn't look only at average latency, since there can be big skews and massive outliers. What you want is consistent low latency.
Achieving consistent low latency is non-trivial. E.g. one factor for languages/run-times like Node.js or PyPy (a JITted Python implementation) is the garbage collector. Every time the GC runs, it'll slow things down, potentially introducing large latencies into messaging. I have done extensive benchmarking (unpublished) which indicates that PyPy's incremental GC is very good in this regard - better than HotSpot (JVM) and Node.js (Google V8). When in doubt, and since I haven't (yet) published numbers, you shouldn't believe me, but measure yourself.
The one thing I'd strongly recommend: don't rely on average latency; measure quantiles, and do histograms.
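A sketch of what that means in practice (Java, nearest-rank percentiles over invented sample data):

```java
import java.util.Arrays;

public class LatencyStats {
    // Nearest-rank percentile: the smallest sample such that at least
    // q percent of all samples are <= that sample.
    static long percentile(long[] sortedMicros, double q) {
        int rank = (int) Math.ceil(q / 100.0 * sortedMicros.length);
        return sortedMicros[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        // Pretend these are per-message round-trip times in microseconds.
        long[] samples = {850, 900, 910, 920, 940, 980, 1000, 1200, 5000, 90000};
        Arrays.sort(samples);

        System.out.println("median = " + percentile(samples, 50.0) + "us");
        System.out.println("p95    = " + percentile(samples, 95.0) + "us");
        System.out.println("p99    = " + percentile(samples, 99.0) + "us");
        // Note how a single pause (the 90000us outlier) barely moves the
        // median but completely dominates the tail quantiles.
    }
}
```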
Disclosure: I am the original author of Autobahn and work for Tavendo.
Are CORBA (language-agnostic), RMI (Java) and (D)COM (MS) still relevant today, or is there a technology that has surpassed them?
Cheers,
J
They're not so popular today, as modern Java or .NET architectures typically do this type of thing using HTTP-based web services.
However, many systems still use these architectures, and they are more efficient than web-service architectures, as they typically use compact binary protocols rather than text-based HTTP. While these architectures are still in use today, in practice they are mostly relegated to legacy and niche-market systems.
In some cases RMI is used behind the scenes in Java app servers. For example, a bean container may be moved to a separate server from a web-app server. Java app servers make this fairly transparent: the bean container can reside on the same server, accessed through local calls, or on a different server, accessed through RMI. With the right application architecture it's just a config item, and the app server does all the remoting behind the scenes.
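For reference, the raw RMI plumbing that the app server hides looks roughly like this; a minimal sketch with invented names:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The remote interface shared between client and server.
interface GreetingService extends Remote {
    String greet(String name) throws RemoteException;
}

public class Server implements GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) throws Exception {
        // Export the object and register it under a well-known name.
        GreetingService stub =
                (GreetingService) UnicastRemoteObject.exportObject(new Server(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("greeting", stub);

        // A client elsewhere would do:
        //   GreetingService svc = (GreetingService)
        //       LocateRegistry.getRegistry("host", 1099).lookup("greeting");
        //   svc.greet("world");
    }
}
```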
DCOM gets used similarly with COM+ apps. However, COM+ is largely a legacy architecture on Windows. It was popular with VB6 but that is all but deprecated.
CORBA had a somewhat deserved reputation for complexity due to its design-by-committee roots. However, it pops up in a lot of unexpected places. For example, earlier versions of GNOME made use of a CORBA-based component model called Bonobo, but this has largely been replaced with D-Bus in current versions. Apart from legacy system infrastructure, it has a few niche markets (mostly low-latency applications) that benefit from its characteristics, such as UDP-based transport mechanisms.
Java EE EJBs still use RMI over IIOP (the CORBA wire protocol) for remote invocations.
Perhaps that's one reason why HTTP web services, be they based on XML-RPC, SOAP, or REST, are ascendant. Simple and open usually wins.
Security always tends to take last place in a new project. Or you use a framework like Spring, where security is already built in and can be switched on easily.
I am trying to find an open security framework that can be plugged into both Swing and web applications (and JavaFX?) and that is, ideally, easy to digest. I have looked at plain JAAS, JGuard and JSecurity, but they are just too complicated to get started with.
Any recommendations or experiences to share?
I am working with NetBeans, Glassfish and MySQL.
Thanks
Sven
I have just taken a look at this: http://shiro.apache.org/
Apache Shiro is a powerful and easy-to-use Java security framework that performs authentication, authorization, cryptography, and session management. With Shiro's easy-to-understand API, you can quickly and easily secure any application - from the smallest mobile applications to the largest web and enterprise applications.
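The core of the API is a Subject plus an authentication token. A minimal sketch, assuming a Shiro SecurityManager has already been configured (e.g. from a shiro.ini); the credentials and role name are made up:

```java
import org.apache.shiro.SecurityUtils;
import org.apache.shiro.authc.AuthenticationException;
import org.apache.shiro.authc.UsernamePasswordToken;
import org.apache.shiro.subject.Subject;

public class ShiroDemo {
    public static void main(String[] args) {
        // "Subject" is Shiro's view of the current user, in any
        // environment - web, Swing, or plain Java SE.
        Subject currentUser = SecurityUtils.getSubject();

        if (!currentUser.isAuthenticated()) {
            UsernamePasswordToken token =
                    new UsernamePasswordToken("sven", "secret"); // demo credentials
            try {
                currentUser.login(token);          // authentication
            } catch (AuthenticationException ae) {
                System.out.println("login failed: " + ae.getMessage());
                return;
            }
        }

        // Authorization: role checks use the same API regardless of
        // whether this runs in a web app or a rich client.
        if (currentUser.hasRole("admin")) {
            System.out.println("admin functions enabled");
        }
    }
}
```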
I would strongly recommend learning JAAS. It really isn't that difficult to pick up, and there are some useful tutorials and a reference guide on the Sun web site.
In my experience, JAAS is pretty widely used, so it's definitely something you'll be able to reuse once you've learnt it. It also happens to be one of the building blocks of the Glassfish authentication mechanism!
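To give a feel for the API: a JAAS login revolves around a LoginContext, a named configuration entry and a CallbackHandler. A minimal sketch - the entry name "Sample" and the credentials are placeholders that must match your JAAS configuration:

```java
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class JaasDemo {
    public static void main(String[] args) throws LoginException {
        // The handler feeds credentials to whatever LoginModules the
        // "Sample" configuration entry lists.
        CallbackHandler handler = (Callback[] callbacks) -> {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback) {
                    ((NameCallback) cb).setName("sven");
                } else if (cb instanceof PasswordCallback) {
                    ((PasswordCallback) cb).setPassword("secret".toCharArray());
                }
            }
        };

        LoginContext lc = new LoginContext("Sample", handler);
        lc.login();                        // runs the configured LoginModules
        Subject subject = lc.getSubject(); // authenticated identity + principals
        System.out.println("principals: " + subject.getPrincipals());
        lc.logout();
    }
}
```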
I did similar research into JAAS for web applications and ran into a "mind roadblock" until I finally realized that JAAS is a framework addressing security at a different "layer" than traditional web applications in the Java world. It is built to tackle security issues in J2SE, not J2EE.
JAAS is a security framework built for securing things at a much lower level than web applications. Examples of such things are code and resources available at the JVM level, hence all the facilities for setting policy files at the JVM level.
However, since J2EE is built on top of J2SE, a few modules from JAAS were reused in J2EE security, such as the LoginModules and Callbacks.
On the other hand, Acegi, a.k.a. Spring Security, tackles a much higher "layer" of the web-application security problem. It is built on top of J2EE security, and hence on J2SE and JAAS. Unless you are looking to secure resources at the J2SE level (classes, system resources), I don't see any real use for JAAS other than its common classes and interfaces. Just focus on using Acegi or plain old J2EE security, which solve a lot of common web-application security problems.
At the end of the day, it is important to learn which "layer" of the J2EE/J2SE security problem you are tackling and to choose the right tool(s) for it.
I would recommend you take a look at OACC (http://oaccframework.org). OACC was designed to solve the problem of application security. Unlike most frameworks, OACC is able to store and manage the authorization relationships in your application. OACC's authorization model is more powerful than Shiro's or Spring Security's.
There is an alternative from JBoss: a new version of PicketBox. More information here:
https://docs.jboss.org/author/display/SECURITY/Java+Application+Security
Apache Shiro fails miserably when you stress a web application under JBoss (say, 2 million requests of a simple GET with a concurrency of 50 threads).
It was very disappointing to find this out.
It happens when you use filters.
You can read http://code4reference.com/2013/08/guest-posttop-java-security-frameworks-for-developing-defensive-java-applications/
It gives a 1000-mile view of various Java security frameworks, such as JAAS, Shiro and Spring Security. Which one fits depends on your requirements and the technology stack you choose.
Recently, I've become quite involved in experimenting with lightweight grid frameworks (Hazelcast, GigaSpaces, Infinispan).
However, I've been somewhat surprised that none of the free frameworks I tried has any ACL or role-based security features built in (GigaSpaces does have some measures).
What approaches are generally used to compensate for this? Am I supposed to only use the grid to share data between trusted server-side applications and use the traditional Java EE stack (i.e. a conventional DAO-layer) to access data from client or non-trusted server applications?
Are there any grid frameworks that provide ACL capabilities for accessing data in the grid (I'd be happy with some ad-hoc stuff, although complying to Java EE role concepts would be nice)?
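To make the DAO-layer option concrete, this is roughly the arrangement I have in mind - a hypothetical sketch in which the grid is only reachable through a DAO that enforces roles (the map, the role name and the CallerContext interface are all invented; the actual role check would come from JAAS, Shiro or the container):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for a grid client, e.g. a Hazelcast or Infinispan map.
class Grid {
    static final Map<String, String> CUSTOMERS = new ConcurrentHashMap<>();
}

// Hypothetical caller context; in practice this would delegate to
// the container's or security framework's role check.
interface CallerContext {
    boolean hasRole(String role);
}

// The DAO is the only code allowed to touch the grid directly,
// so authorization is enforced at this single choke point.
class CustomerDao {
    private final CallerContext caller;

    CustomerDao(CallerContext caller) {
        this.caller = caller;
    }

    public String findCustomer(String id) {
        if (!caller.hasRole("customer-reader")) {
            throw new SecurityException("caller may not read customers");
        }
        return Grid.CUSTOMERS.get(id);
    }
}
```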
This is my opinion on the current state of open-source distributed cache solutions (e.g. JBoss Cache and Infinispan). As a baseline I am using GigaSpaces' commercial caching product. Let me know what you think about open-source and proprietary cache products.
Read more at: http://bigdatamatters.com/bigdatamatters/2009/09/infinispan-vs-gigaspaces.html