What is OpenCMIS Bridge?

I just noticed this project at Apache OpenCMIS:
https://svn.apache.org/repos/asf/chemistry/opencmis/trunk/chemistry-opencmis-bridge
There is no description, no documentation, and reading the code does not give many hints about what it is supposed to do.
Apache OpenCMIS sometimes releases great software silently, with little communication, so we might be missing another great piece of software here.
A Google Search for "OpenCMIS Bridge" returns only source code and the bare download page.

The OpenCMIS Bridge works like a proxy server. It accepts CMIS requests and forwards them to a CMIS server. On the way it can change the binding, and filter, enrich and federate data.
Here are a few use cases:
If a repository does not support the CMIS 1.1 browser binding, you can put the OpenCMIS Bridge in front of it. The bridge then could talk JSON to the client and AtomPub to the server. The client wouldn't notice that the server doesn't support the browser binding.
Code can be added to the bridge to redact property values or filter whole objects when they are transferred through the bridge. That could add another level of security that the native repository doesn't support.
Code can also be added to add or enrich object data. For example, property values could be translated from cryptic codes into readable values. Virtual secondary types can be added on the fly. Or additional renditions could be provided.
The bridge can also be used to provide different views of multiple repositories. Repositories from different vendors can be accessed through one unified endpoint. It's possible to build one virtual repository across multiple backend repositories, which then, for example, allows a federated query across all backends.
The OpenCMIS Bridge is only a framework, though. It just provides the infrastructure and the hooks to add your own code and rules.
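To make the hook idea concrete, here is a minimal, self-contained Java sketch of the decorator pattern the bridge is built around. The SimpleCmisService interface and the property names are simplified stand-ins for illustration, not the actual OpenCMIS server SPI or the bridge's real extension points:

    import java.util.HashMap;
    import java.util.Map;

    // Simplified stand-in for a CMIS object service. The real OpenCMIS server SPI
    // (org.apache.chemistry.opencmis.commons.server.CmisService) is much larger.
    interface SimpleCmisService {
        Map<String, Object> getObjectProperties(String repositoryId, String objectId);
    }

    // Decorator that forwards calls to a backend repository and redacts/enriches
    // properties on the way back - the kind of rule the bridge lets you hook in.
    class FilteringCmisService implements SimpleCmisService {
        private final SimpleCmisService backend;

        FilteringCmisService(SimpleCmisService backend) {
            this.backend = backend;
        }

        @Override
        public Map<String, Object> getObjectProperties(String repositoryId, String objectId) {
            Map<String, Object> props = new HashMap<>(backend.getObjectProperties(repositoryId, objectId));
            props.remove("acme:salary");                     // redact a sensitive property (made-up name)
            if ("03".equals(props.get("acme:statusCode"))) { // enrich: translate a cryptic code
                props.put("acme:statusLabel", "Approved");
            }
            return props;
        }
    }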
If you are looking for a real world application, check SAP Document Center (formerly "SAP Mobile Documents"). It is based on the OpenCMIS Bridge.

Related

Create multiple front-ends hitting same data source

I want to create and host 4-5 websites using the same database. The only difference between the sites will be:
branding (colours and header)
data will be filtered per website (through a SQL query), and
each site will be on a separate domain (but can be hosted on the same server)
My first thought was to use an API/REST model and provision five front-ends in their own sub-domains. But as the sites can be hosted on the same server (I'm assuming one hosting account that enables multiple sub-domains), I think I can simply connect all sites to the same database via a connection string, avoiding the complexities of using REST.
Is this possible, and would I run into database conflicts doing this?
If, later, I wanted to add a mobile app client, will I need to build out a REST interface anyway?
Thanks
The right thing to do here depends a lot on your specific use case, expected load, preferred backend/edge technology, future plans, etc.
Site domains and servers -
The main point here is that you can host your domains/subdomains on the same or different servers. You simply need to update the DNS to point to the correct IP (update the subdomain's A record).
Note: If these sites are all public-facing, then I highly recommend using an edge/proxy server, and even considering a load balancer depending on the expected number of visitors (Nginx or Apache HTTP Server, for example).
Decoupled architecture is almost always preferred -
I would definitely have an API/REST layer to abstract the database from the sites. This ensures that you establish a contract through which any clients can interact with the backend, including your mobile application. You also don't have to duplicate DB-specific code across the various clients. What if you decided to change your schema? Or even your database solution? Then all clients would be broken and your customers would be unhappy. As a guiding principle, think: if I change any one thing in my architecture, how many other things will need to change as a result? In terms of scalability, this architecture will also allow you to easily spin up more instances of whatever it is you need (databases, REST service, etc.) should the need arise.
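Purely as an illustration of that contract (the tutorial referenced in the next point uses Node.js/Express instead), here is a rough Java sketch of such a layer using the JDK's built-in HttpServer. The /items route, the site query parameter and the in-memory data are made up; in practice the handler would run a SQL query filtered by site:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.List;
    import java.util.Map;

    // Minimal REST-ish endpoint: every front-end (and later a mobile app) calls
    // GET /items?site=<siteId> instead of talking to the database directly.
    public class ItemsApi {
        // Stand-in for the shared database; in reality this would be a query
        // with a WHERE clause on the site id.
        private static final Map<String, List<String>> ITEMS_BY_SITE = Map.of(
                "site-a", List.of("alpha", "beta"),
                "site-b", List.of("gamma"));

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/items", exchange -> {
                String query = exchange.getRequestURI().getQuery();   // e.g. "site=site-a"
                String site = (query != null && query.startsWith("site=")) ? query.substring(5) : "";
                byte[] body = ITEMS_BY_SITE.getOrDefault(site, List.of()).toString()
                        .getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();   // each site's UI (and a future mobile app) shares this one API
        }
    }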
How do I build and deploy a REST API? -
Re: #2, to set up a simple custom REST service running on Node.js (and Express), this is a good tutorial. The example also walks through setting up and integrating with an in-memory MongoDB database.
Database collisions? -
If you follow the above steps, this should be a moot point. Node.js/Express and the databases expose ways to configure connection pools if the defaults do not suffice. Again, this will depend on your needs - how many concurrent users you expect.
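The same connection-pool idea exists in pretty much every stack. As a Java/JDBC illustration (HikariCP is my choice here, not something from the answer above, and the URL and credentials are placeholders), capping the pool size keeps several sites sharing one database from exhausting connections:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    // One shared pool per application instance; all the sites' requests borrow
    // connections from it instead of opening their own ad hoc connections.
    public class Pool {
        public static HikariDataSource create() {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:postgresql://db-host:5432/shareddb"); // placeholder URL
            config.setUsername("app");                                    // placeholder credentials
            config.setPassword("secret");
            config.setMaximumPoolSize(10);        // upper bound on concurrent DB connections
            config.setConnectionTimeout(30_000);  // ms to wait for a free connection
            return new HikariDataSource(config);
        }
    }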

Using hawtio to host multiple applications

Is it recommended to use hawtio as a host for several small user interfaces? What we have are a lot of discrete services performing fairly focused tasks, each with its own (AngularJS) UI for configuration and management. A thought I had was that we might deploy each of these UIs so that they could be incorporated into hawtio, where they would live on individual tabs.
Additionally, we would want to have some kind of authentication/authorization to limit which tabs users could see. For example, we would not want everyone to see the JBoss or Camel tabs, but we would want them to see the UIs that we created for the individual services (and probably have levels of authorization within them).
Is this even a reasonable use for hawtio?
You can build your own custom distribution of hawtio, or turn off various plugins, etc., by providing an HTML file in which you disable them in the various perspectives.
There are some examples of how to do that at
https://github.com/hawtio/hawtio/tree/master/hawtio-plugin-examples/custom-perspective
You can build third-party plugins for your own apps and have them integrated as first-class citizens in hawtio.
hawtio is designed as a pluggable web console.

ServiceStack Development Tooling?

Not sure if this is the most effective place to ask this question. Please redirect if you think it is best posted elsewhere to reach a better audience.
I am currently building some tooling in Visual Studio 2013 (using NuPattern) for a project that implements standard REST services using the ServiceStack framework. That is, the tooling helps you implement REST services that meet a set of design rules and guidelines for good REST service design (in this case, those advocated by the Apigee guidelines).
Based on some simple configuration by the service developer, for each resource they wish to expose as a REST endpoint, with any number of named verbs (of type GET, PUT, POST or DELETE), the tooling generates the following code files, with conventional names and folder structure, into the projects of your own solution in Visual Studio (all in C# at this point):
The service class, and the service interface containing each named verb.
Both the request DTO and response DTOs, containing each named field.
The validator classes for each request DTO, which validates each request DTO field.
A manager class (and interface) that handles the actual calls with data unwrapped from the DTOs.
Integration tests that verify each verb with common edge test cases, and that verify status codes, web exceptions, and basic connectivity.
Unit tests for each service and manager class, which verify parameters, common edge cases, and exception handling.
etc.
The toolkit is proving to be extremely useful in getting directly to the inner coding of the actual service, by taking care of the ServiceStack plumbing in a consistent manner. From there it is basically up to you what you do with the data passed to you from the request DTOs.
Basically, once the service developer names the resource and chooses the REST verbs they want to support (typically any of these common ones: Get, List, Create, Update, Delete), they jump straight to the implementation of the actual code that does the good stuff, rather than worrying about coding up all the types around the web operations and plumbing them into the ServiceStack framework. Of course, we support nested routes and that good stuff so your REST API can evolve appropriately.
The toolkit is evolving as more is learned about building REST services with ServiceStack, and as we want to add more flexibility to it.
Since so much value is being discovered with this toolkit in our specific project, I wanted to see if others in the ServiceStack community (particularly those new to it, or old hands at it) would see any value in us making it open source and letting the community evolve it with their own expertise, to help others move forward quicker with ServiceStack.
(And, of course, selfishly give us a chance to pay forward to others, out of respect for the many contributions others have selflessly made in the ServiceStack communities that have helped us move forward.)
Let us know what you think; we can post a video demonstrating the toolkit as it is now, so you can see what the developer experience currently is.
Video walkthrough of the VS.NET Extension
A video walkthrough of the workflow is available at:
http://www.youtube.com/watch?v=ejTyvKba_vo
Toolkit Project
The toolkit is now available here:
https://github.com/jezzsantos/servicestacktoolkit

Securing elasticsearch

I am completely new to elasticsearch, but I like it very much. The only thing I can't find and can't get done is securing elasticsearch for production systems. I read a lot about using nginx as a proxy in front of elasticsearch, but I have never used nginx and have never worked with proxies.
Is this the typical way to secure elasticsearch in production systems?
If so, are there any tutorials or nice reads that could help me to implement this feature. I really would like to use elasticsearch in our production system instead of solr and tomcat.
There's an article about securing Elasticsearch which covers quite a few points to be aware of here: http://www.found.no/foundation/elasticsearch-security/ (Full disclosure: I wrote it and work for Found)
There are also some things you should know here: http://www.found.no/foundation/elasticsearch-in-production/
To summarize the summary:
At the moment, Elasticsearch does not consider security to be its job. Elasticsearch has no concept of a user. Essentially, anyone that can send arbitrary requests to your cluster is a “super user”.
Disable dynamic scripts. They are dangerous.
Understand that sometimes tricky configuration is required to limit access to indexes.
Consider the performance implications of multiple tenants: a weakness or a bad query in one can bring down an entire cluster!
Proxying ES traffic through nginx with, say, basic auth enabled is one way of handling this (but use HTTPS to protect the credentials). Even without basic auth in your proxy rules, you might, for instance, restrict access to various endpoints to specific users or from specific IP addresses.
What we do in one of our environments is use Docker. Docker containers are only accessible to the outside world and/or to other Docker containers if you explicitly define them as such. By default, they are isolated.
In our docker-compose setup, we have the following containers defined:
nginx - Handles all web requests, serves up static files and proxies API queries to a container named 'middleware'
middleware - A Java server that handles and authenticates all API requests. It interacts with the following three containers, each of which is exposed only to middleware:
redis
mongodb
elasticsearch
The net effect of this arrangement is that access to elasticsearch can only happen through the middleware piece, which ensures authentication, roles and permissions are correctly handled before any queries are sent through.
A full Docker environment is more work to set up than a simple nginx proxy, but the end result is something that is more flexible, scalable and secure.
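Conceptually, the middleware container does something like the following rough Java sketch: authenticate the caller first, then forward the search to the elasticsearch container over the internal Docker network. The hostname, index name and token check are placeholders, not the actual implementation described above:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Only this middleware can reach the "elasticsearch" container; browsers and
    // other services never talk to ES directly.
    public class SearchGateway {
        private final HttpClient http = HttpClient.newHttpClient();

        public String search(String userToken, String queryJson) throws Exception {
            if (!isAuthorized(userToken)) {                 // authentication/roles are enforced here
                throw new SecurityException("not allowed");
            }
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://elasticsearch:9200/myindex/_search")) // internal Docker hostname, made-up index
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(queryJson))
                    .build();
            HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();
        }

        private boolean isAuthorized(String token) {
            return token != null && !token.isEmpty();       // placeholder check only
        }
    }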
Here's a very important addition to the info presented in answers above. I would have added it as a comment, but don't yet have the reputation to do so.
While this thread is old(ish), people like me still end up here via Google.
Main point: this link is referenced in Alex Brasetvik's post:
https://www.elastic.co/blog/found-elasticsearch-security
He has since updated it with this passage:
Update April 7, 2015: Elastic has released Shield, a product which provides comprehensive security for Elasticsearch, including encrypted communications, role-based access control, AD/LDAP integration and Auditing. The following article was authored before Shield was available.
You can find a wealth of information about Shield in Elastic's Shield documentation.
A very key point to note is that Shield requires Elasticsearch version 1.5 or newer.
Yeah, I also had the same question, but I found one plugin provided by the elasticsearch team, i.e. Shield. It is a limited version; for production use you need to buy a license. Please find the attached link for your perusal:
https://www.elastic.co/guide/en/shield/current/index.html

Fetching data from Cerner's EMR / EHR

I don't have much of an idea about the medical domain.
We are evaluating a requirement from our client, who is using the Cerner EMR system.
As per the requirement, we need to expose the Cerner EMR or fetch some EMR / EHR data and display it in a SharePoint 2013 portal.
To meet this requirement, what kind of integration options does Cerner propose? Are there any APIs or web services exposed that can be used to build custom solutions for this?
As far as I know, Cerner does expose EMR / EHR information in HL7 format, but I don't have any idea how to access that.
I have also requested this from Cerner and am awaiting replies from their end.
If anybody who has worked on a similar kind of job can throw some light on this and provide me with some insights, that would be much appreciated.
You will need to request an interface between your organization and the facility with the EMR. An interface in the health care IT world is not the same as a GUI. It is the mechanism (program/tool) that transfers HL7 data between one entity and the other. There will probably be a cost to have an interface set up. However, that is the traditional way Cerner communicates with third parties. HIPAA laws will require that this connection be very secure.
You might also see if the facility with the EMR has an existing interface that produces the info you are after. You may be able to share that data or have a flat file generated from that interface that you could get access to. Because of HIPAA regulations, your client may be reluctant to share information in that manner.
I would suggest you start with your client's interface/integration team. They would be the ones that manage the information into and out of Cerner. They could also shed some light on how they prefer to see things done.
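The data that comes across such an interface is typically pipe-delimited HL7 v2 messages. The answer above doesn't name a parsing library; purely as a suggestion (not something Cerner provides), the open-source HAPI library can turn that feed into fields you could surface in SharePoint. A minimal sketch, assuming the hapi-base and hapi-structures-v23 jars are on the classpath; the message content is made up:

    import ca.uhn.hl7v2.DefaultHapiContext;
    import ca.uhn.hl7v2.HapiContext;
    import ca.uhn.hl7v2.model.Message;
    import ca.uhn.hl7v2.util.Terser;

    public class Hl7Peek {
        public static void main(String[] args) throws Exception {
            // A made-up, minimal ADT message purely for illustration.
            String msg = "MSH|^~\\&|CERNER|FAC|SP_PORTAL|FAC|202401011200||ADT^A01|MSG00001|P|2.3\r"
                       + "PID|1||12345^^^MRN||DOE^JOHN";

            HapiContext context = new DefaultHapiContext();
            Message parsed = context.getPipeParser().parse(msg);

            // Terser pulls individual fields by segment/field coordinates.
            Terser terser = new Terser(parsed);
            System.out.println("Patient: " + terser.get("/.PID-5-2") + " " + terser.get("/.PID-5-1"));
        }
    }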
Good Luck
There are two ways of achieving this, as far as I know. One is direct connectivity to Cerner's Oracle database. This seems less likely to be possible, as Cerner doesn't allow other vendors to have direct access to their database.
The other way is to use Cerner's mPage web services, which is how we have achieved this. The client needs to host the web services on IBM WAS or some other container; we used WAS as it was readily available to us. Once the hosting is done, you will get a URL, and using that you can execute any CCL program, which will return the data to you in JSON/XML format. The mPage web services use basic HTTP authentication.
Now, the CCL has to be written in a way that returns the data you require.
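To make the mPage route a bit more concrete: once the web services are hosted on WAS, the caller essentially makes an authenticated HTTP request and gets JSON/XML back. A rough sketch with the JDK HTTP client; the host, path, CCL program name and credentials below are all placeholders, not real Cerner endpoints:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class MPageClient {
        public static void main(String[] args) throws Exception {
            // Placeholder URL: the real one comes from your WAS deployment of the mPage web services.
            String url = "https://was-host.example.org/mpage-ws/ccl/MY_CCL_PROGRAM?format=json";

            // Basic HTTP authentication, as described above (placeholder credentials).
            String credentials = Base64.getEncoder()
                    .encodeToString("serviceuser:secret".getBytes(StandardCharsets.UTF_8));

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(url))
                    .header("Authorization", "Basic " + credentials)
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());   // the JSON/XML produced by the CCL program
        }
    }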
We have a successful setup and have been working with this since 2014. For more details you can also try the uCern portal.
Thanks,
Navin
