Eclipse IoT Binding Hawkbit-Ditto-Hono - eclipse-hono

I am trying out the Eclipse IoT framework on my Raspberry Pi 3.
On the host side (local PC backend) I would like to bind hawkBit with Ditto, and at least Ditto with Hono. From here and here I read that it is still not possible to connect hawkBit with Ditto. Is that right?
If not, is there a way to connect hawkBit to Hono?
thanks
ajava
Update
Perhaps it is not yet clear what I am trying to achieve. At the moment, any edge device can communicate with and register itself at hawkBit via the DDI API. On the other hand, these edge devices also communicate with the backend-specific applications through the chain Hono -> Ditto -> App. This communication path is standardized on AMQP.
Now my Questions:
1. hawkBit and Hono each maintain their own separate device/tenant repository. Fortunately, I see some efforts to merge them here, but it seems to me that this is not yet part of an official release.
Correct? If not, I would be thankful for any help and suggestions on how to maintain only one repository.
2. Using hawkBit only through the DDI API, without being able to include it in the IoT chain (Hono -> Ditto -> hawkBit), makes it feel like a foreign body. So it would be helpful if one could also use hawkBit's DMF API to connect it to either Hono or, better still, to Ditto. In my opinion this is still not possible, or did I misunderstand something here?
Thanks and best regards
Arash

ad 1) You are right, they use separate repositories for their data. The PR you referred to is, from my point of view, just a workaround, as it basically tries to sync hawkBit's DB with Hono's registry. However, the only other solution I currently see is to put a separate provisioning service in front of Hono and hawkBit which takes care of provisioning tenants and devices to both hawkBit and Hono using their respective APIs.
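To make the provisioning-service idea concrete, here is a minimal Python sketch of how such a service could compose the two registration calls. The base URLs are placeholders, and while the endpoint paths (POST /rest/v1/targets in hawkBit's Management API, POST /v1/devices/{tenant}/{device} in Hono's Device Registry) match the documented APIs, verify them against the versions you actually deploy; the payloads shown are the bare minimum.

```python
import json

# Placeholder base URLs -- adjust to your deployment.
HAWKBIT_URL = "http://localhost:8080"
HONO_REGISTRY_URL = "http://localhost:28080"

def hawkbit_target_request(controller_id, name):
    """Compose the hawkBit Management API call that registers a target.
    POST /rest/v1/targets expects a JSON array of target objects."""
    return ("POST",
            f"{HAWKBIT_URL}/rest/v1/targets",
            json.dumps([{"controllerId": controller_id, "name": name}]))

def hono_device_request(tenant_id, device_id):
    """Compose the Hono Device Registry call that registers a device.
    POST /v1/devices/{tenantId}/{deviceId} creates the device."""
    return ("POST",
            f"{HONO_REGISTRY_URL}/v1/devices/{tenant_id}/{device_id}",
            json.dumps({"enabled": True}))

def provision(controller_id, tenant_id):
    """Return both requests the provisioning service would issue so the
    same device ends up registered in hawkBit and in Hono."""
    return [hawkbit_target_request(controller_id, controller_id),
            hono_device_request(tenant_id, controller_id)]
```

A real service would send these requests with an HTTP client, add authentication, and undo one registration if the other fails, so the two repositories stay in sync.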
ad 2) You are right in that there currently is no integration between hawkBit and Hono/Ditto regarding the DMF API. There is some work under way that will allow hawkBit to use Ditto as its repository, but I do not know its current status or when it will be available. Maybe you can ask about it on one of hawkBit's community channels?

Related

Connect new peer to existing Hyperledger from client side

I just set up a Hyperledger network on a single local system. I have already added a new org to an existing network using the tutorial here. I need to know something:
Q. Is it possible to use Hyperledger to add new peers (or run new peers) from the client side?
The short answer is yes, it is possible.
It has been done already; for example, IBM has a full Hyperledger offering where everything is configurable via APIs. You can create channels, orgs, and peers, join them, etc.; everything you need to build and manage a network is there.
What you could do is build an API capable of executing scripts on the machine(s) where your network is deployed. You could build such an API in a language of your choosing, secure it in some way, and offer a few endpoints which do what you need.
The first step is creating the script(s) which can do what you need. Make sure you have a repeatable process: write down all the commands you need, from the URL you already visited, step by step, until you can go from nothing to the simplest version of the network you're happy with.
Create a script with all the commands you've already tested. Make sure the script can take the parameters it needs so they can later be passed via the API: channel name, org name, whatever your script creates, basically.
Create an API with an endpoint which can be called, has the required permissions, runs in the right location, and can execute the script you just created. Make sure you pass the parameters you need from the API endpoint to the script.
Now you can call the API from the client side.
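To sketch that endpoint-to-script plumbing, assuming a hypothetical addorg.sh that accepts --channel and --org flags (your script's actual parameters will differ), the execution part could look like this in Python; the web framework around it is your choice:

```python
import subprocess

def build_command(script_path, channel_name, org_name):
    """Build the argv list for the network script. Passing parameters as
    separate list items (never as one interpolated shell string) avoids
    shell-injection problems with API-supplied values."""
    return [script_path, "--channel", channel_name, "--org", org_name]

def run_script(script_path, channel_name, org_name):
    """Execute the script and return its stdout; raises CalledProcessError
    on a non-zero exit so the API can report the failure to the caller."""
    cmd = build_command(script_path, channel_name, org_name)
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout
```

An API endpoint would validate the incoming channel/org names, call run_script, and return the output (or the error) to the client.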
One thing to consider: the API needs to be secured somehow.
So, yes, it is possible, and no, it is not an easy job.
Is it worth doing it? Only you can answer that question to be honest.
You would also need to keep up with the changes introduced by each version increment, so that's even more work to consider.

How can we achieve microservices-related functionality with the LoopBack framework?

I need your help with the LoopBack framework.
Actually, my need is to achieve microservices-related functionality with the LoopBack framework.
Please share any links/tutorials/knowledge if you have any.
I have gone through the links below:
https://strongloop.com/strongblog/creating-a-multi-tenant-connector-microservice-using-loopback/
I have downloaded the related demos from the links below, but they don't work:
https://github.com/strongloop/loopback4-example-microservices
https://github.com/strongloop/loopback-example-facade
Thanks,
Basically it depends on your budget and the size of your system. You can make some robust and complex implementations using tools like Spring Cloud or KrakenD. As a matter of fact, your question is too broad. I have some microservices architecture knowledge, and I can recommend just splitting your functionality into containerized services, probably orchestrated by Kubernetes. That way you can expose, for example, a User microservice with LoopBack, and another Authentication microservice with LoopBack and/or any other language/framework.
You could (but shouldn't) add communication between those microservices (as you should expose some REST functionality) with something like gRPC.
The biggest cloud providers have ready-made solutions; e.g., AWS has ECS and Fargate, and on GCP you have managed Kubernetes (GKE).
We have created an open source catalog of microservices which can be used in any microservice project using LB4. Also, you can get an idea of how to create microservices using LB4. https://github.com/sourcefuse/loopback4-microservice-catalog

Spring Cloud: ZUUL + Eureka + NodeJS

I am new to Spring Boot (Cloud) and am going to work on a new project.
Our project architect has designed this new application like this:
One front-end Spring Boot application (which is also a microservice) with Angular 2.
One Eureka server, to which the other microservices will connect.
A ZUUL proxy server, which will connect to the front end and the microservices.
Now, the following are the things I am confused about, and I can't ask him as he is too senior to me:
Do I need to have a separate ZUUL proxy server? I mean, what are the pros and cons of using the front-end application itself as the ZUUL server?
How will MicroService-1 communicate with Node's MicroService-1? Some blogs suggest the Sidecar pattern. But again, why? I can directly invoke the REST API of NodeJS-1 from MicroService-1.
(I know this is very tough to guess, but still asking.) The NodeJS services (which are not legacy services) are supposed to call some third-party API or retrieve data from a DB.
Now, what I am not getting is why we need the NodeJS code at all. Why can't we do the same in the microservices written in Java?
Can anyone who has worked with a similar kind of scenario shed some light on my doubts?
I do not have the full context of the problem that you are trying to solve, therefore the answers below are quite general, but they may still be useful:
Do I need to have a separate ZUUL proxy server? I mean, what are the pros and cons of using the front-end application itself as the ZUUL server?
Yes, you are going to need a separate API gateway service, which may be Zuul (or another gateway, e.g. tyk.io).
The main idea here is that you can have hundreds or even thousands of microservices (like Amazon, Netflix, etc.) and they can be scattered across different machines or data centres. It would be really unwise to force your API consumers (in your case Angular 2) to memorise all the possible locations of each microservice. Better to have one API gateway that knows about all the services behind it, so your clients can call the gateway and reach the underlying services through one single place. Having a gateway also decouples your clients from your services, so the two can evolve independently.
Another benefit is that you can have access control, logging, security, etc. in one single place. And, BTW, I think that you are missing one more thing in your architecture - it's Authorization Server. A common approach in building security for microservices is OAuth 2.0.
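For illustration, a standalone Zuul gateway registered with Eureka is mostly configuration. The zuul.routes and eureka.client keys below are standard Spring Cloud Netflix properties, but the route and service names are hypothetical:

```yaml
# application.yml of the gateway service (route/service names are illustrative)
zuul:
  routes:
    node-service:
      path: /node/**
      serviceId: nodejs-1        # the id NodeJS-1 registers under in Eureka
    java-service:
      path: /api/**
      serviceId: microservice-1

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
```

With this in place, a request to /node/** is resolved through Eureka to a live NodeJS-1 instance, and the gateway stays the single entry point for clients.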
How will MicroService-1 communicate with Node's MicroService-1? Some blogs suggest the Sidecar pattern. But again, why? I can directly invoke the REST API of NodeJS-1 from MicroService-1.
I think you could use Sidecar, but I have never used it myself. I suppose the question of 'why' is related to the discovery service (Eureka in your architecture).
You can't call microservice NodeJS-1 directly because there may be several instances of NodeJS-1; which one would you call? Furthermore, you can't know whether a service is down or alive at any given point in time. That's why we use discovery services like Eureka: they handle all of this. When any given service starts, it must register itself in Eureka. So if you have started several instances of NodeJS-1, they will all be registered in Eureka, and whenever MicroService-1 wants to call NodeJS-1 it asks Eureka for the locations of live instances of NodeJS-1. The service then chooses which one to call.
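The ask-Eureka-then-choose step can be sketched as follows. The instance layout mirrors what Eureka's REST endpoint GET /eureka/apps/{APPID} returns when rendered as JSON (including the "$" key that wraps the port value), though you should verify the field names against your Eureka version; in a Spring application a Ribbon-backed RestTemplate or a Feign client does this for you.

```python
import random

def pick_instance(eureka_app, rng=random):
    """Given the decoded JSON body of Eureka's GET /eureka/apps/{APPID},
    return the base URL of one randomly chosen instance whose status is UP."""
    instances = eureka_app["application"]["instance"]
    up = [i for i in instances if i.get("status") == "UP"]
    if not up:
        raise RuntimeError("no live instances registered")
    chosen = rng.choice(up)
    return "http://{}:{}".format(chosen["hostName"], chosen["port"]["$"])
```

Client-side load balancers refine this with round-robin selection and by refreshing the registry periodically instead of on every call.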
(I know this is very tough to guess, but still asking.) The NodeJS services (which are not legacy services) are supposed to call some third-party API or retrieve data from a DB.
Now, what I am not getting is why we need the NodeJS code at all. Why can't we do the same in the microservices written in Java?
I can only assume that NodeJS has been chosen because it has outstanding performance for IO operations, including the HTTP requests that may come in handy when calling third-party services. I do not have any other rational explanation for this.
In general, microservices give you the possibility to write each service in a different language, which is indeed useful, since each language solves some problems better than others. On the other hand, this decision should be made with caution and should answer the question: "do we really need a new language in our stack to solve problem X?"

Fetching data from Cerner's EMR / EHR

I don't have much knowledge of the medical domain.
We are evaluating a requirement from our client, who is using the Cerner EMR system.
As per the requirement, we need to expose the Cerner EMR, or fetch some EMR/EHR data, and display it in a SharePoint 2013 portal.
What kind of integration options does Cerner propose to meet this requirement? Are there any APIs or web services exposed which can be used to build custom solutions?
As far as I know, Cerner does expose EMR/EHR information in HL7 format, but I don't have any idea how to access it.
I have also asked Cerner about this and am awaiting a reply from their end.
If anybody who has been associated with a similar kind of job can throw some light on this and provide me with some insights, I would appreciate it.
You will need to request an interface between your organization and the facility with the EMR. An interface in the health care IT world is not the same as a GUI. It is the mechanism (program/tool) that transfers HL7 data between one entity and the other. There will probably be a cost to have an interface set up. However, that is the traditional way Cerner communicates with third parties. HIPAA laws will require that this connection be very secure.
You might also see if the facility with the EMR has an existing interface that produces the info you are after. You may be able to share that data or have a flat file generated from that interface that you could get access to. Because of HIPAA regulations, your client may be reluctant to share information in that manner.
I would suggest you start with your client's interface/integration team. They would be the ones that manage the information into and out of Cerner. They could also shed some light on how they prefer to see things done.
Good Luck
There are two ways of achieving this, as far as I know. One is direct connectivity to Cerner's Oracle database. This seems unlikely to be possible, as Cerner doesn't allow other vendors to have direct access to their database.
The other way is to use Cerner's mPage web services, which is how we achieved it. The client needs to host the web services on IBM WAS or some other container; we used WAS as it was readily available to us. Once the hosting is done, you will get a URL, and using that you can execute any CCL program, which will return the data in JSON/XML format. The mPage web service uses basic HTTP authentication.
Now, the CCL has to be written in a way that returns the data you require.
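As an illustration of such a call, here is a Python sketch that builds an authenticated GET request for a hosted mPage web service. The URL layout and the program parameter name are hypothetical placeholders; the actual paths depend on how the services are deployed on WAS.

```python
import base64
import urllib.request

def mpage_request(base_url, ccl_program, username, password):
    """Build a GET request with HTTP basic authentication for a hosted
    mPage web service. The query layout here is illustrative only; your
    WAS deployment defines the real paths and parameters."""
    url = f"{base_url}?program={ccl_program}&format=json"
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Basic {token}")
    return req
```

urllib.request.urlopen(req) would then perform the call; for production use you would also want TLS, certificate verification, and error handling.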
We have had a successful setup and have been working with it since 2014. For more details you can also try the uCern portal.
Thanks,
Navin

What is OpenCMIS Bridge?

I just noticed this project at Apache OpenCMIS:
https://svn.apache.org/repos/asf/chemistry/opencmis/trunk/chemistry-opencmis-bridge
There is no description and no documentation, and reading the code does not give many hints about what it is supposed to do.
Apache OpenCMIS sometimes releases great software silently, with little communication, so we might be missing another great piece of software here.
A Google Search for "OpenCMIS Bridge" returns only source code and the bare download page.
The OpenCMIS Bridge works like a proxy server. It accepts CMIS requests and forwards them to a CMIS server. On the way it can change the binding, and filter, enrich and federate data.
Here are few use cases:
If a repository does not support the CMIS 1.1 browser binding, you can put the OpenCMIS Bridge in front of it. The bridge then could talk JSON to the client and AtomPub to the server. The client wouldn't notice that the server doesn't support the browser binding.
Code can be added to the bridge to redact property values or filter whole objects when they are transferred through the bridge. That could add another level of security that the native repository doesn't support.
Code can also be added to add or enrich object data. For example, property values could be translated from cryptic codes into readable values. Virtual secondary types can be added on the fly. Or additional renditions could be provided.
The bridge can also be used to provide different views of multiple repositories. Repositories of different vendors can be accessed through one unified endpoint. It's possible to build one virtual repository across multiple backend repositories which then, for example, allows a federated query across all backends.
The OpenCMIS Bridge is only a framework, though. It just provides the infrastructure and the hooks to add your own code and rules.
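The bridge itself is written in Java and plugs custom code in through its own service interfaces, but the redaction hook described above boils down to filtering an object's property map before it reaches the client. A language-agnostic sketch in Python (the property names are made up):

```python
def redact_properties(cmis_object, blocked=("cmis:createdBy", "salary")):
    """Return a copy of a CMIS object's property map with the sensitive
    properties removed before the object is handed back to the client.
    The original object is left untouched."""
    props = cmis_object.get("properties", {})
    filtered = {k: v for k, v in props.items() if k not in blocked}
    return {**cmis_object, "properties": filtered}
```

In the real bridge, the equivalent logic would sit in the Java hook that intercepts results on their way back to the caller.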
If you are looking for a real world application, check SAP Document Center (formerly "SAP Mobile Documents"). It is based on the OpenCMIS Bridge.
