Relationship between FHIR and openEHR - modeling

How do HL7 FHIR and openEHR relate?
I understand that HL7 v2 and similar standards provide basic messaging for interoperability.
But FHIR seems to add some clinical data modelling to this in the form of resources - a Visit with a Patient with an Observation is, to my mind, a clinical model, no?
And when you add in the FHIR server concept, are we not verging on a CDR (clinical data repository)?
openEHR, in turn, models the same clinical concepts through archetypes, aggregated within a template - fantastic (this I think I get, and I see where it fits in openEHR).
Next: where is the crossover in interoperability?
Is openEHR designed to provide archetypes as a direct map to the model on the screen?
My understanding is yes (data-source and UI interoperability, if you will)...
i.e. (in its simplest form): the client calls the server, the server runs AQL over the data and returns an XML result, and the client runs XSL over that to generate HTML.
But isn't FHIR more about interoperability and openEHR about data modelling? So are we suggesting that an openEHR server serves the result in its own openEHR form, and that we then map it to FHIR resources and serve it to the front end or to any other interoperable system?
Should we be looking at picking one and forgetting the other?

FHIR models resources with the intent of data interchange.
openEHR defines a complete EHR platform architecture: it manages clinical data structure definitions (archetypes, templates), including constraints and terminology/translations; manages clinical information (a canonical information model); provides access to clinical information through a standard query language (AQL - see the sketch below); defines rules for clinical decision support through a standard rule language (GDL); and defines a service model (the REST API is close to being approved).
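To make the AQL piece concrete, here is a minimal sketch of a client pulling blood pressure readings from an openEHR CDR over its REST Query API. The base URL, EHR id handling and archetype node IDs are illustrative assumptions, and the exact endpoint paths vary by CDR and spec version.

```typescript
// Query an openEHR CDR for systolic blood pressure readings via AQL.
// Base URL, credentials and archetype node IDs are assumptions for illustration.
const AQL = `
  SELECT c/context/start_time/value AS time,
         o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic
  FROM EHR e CONTAINS COMPOSITION c
       CONTAINS OBSERVATION o[openEHR-EHR-OBSERVATION.blood_pressure.v1]
  WHERE e/ehr_id/value = $ehrId`;

async function fetchSystolicReadings(ehrId: string): Promise<[string, number][]> {
  const res = await fetch("https://cdr.example.com/openehr/v1/query/aql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ q: AQL, query_parameters: { ehrId } }),
  });
  const resultSet = await res.json();
  // The Query API returns a result set with `columns` and `rows`.
  return resultSet.rows;
}
```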
So openEHR covers all the internal machinery needed to enable interoperability (not just data exchange), while FHIR is a service layer that can sit on top of an openEHR system, just as other service layers can - HL7 v2.x, IHE profiles, or even DICOM services.
In terms of FHIR over openEHR, mappings between openEHR archetypes and FHIR resources are needed for a technical implementation. So you can have an openEHR CDR and access it via FHIR.
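As a rough sketch of what such a mapping can look like for a single clinical concept, here is a blood pressure reading mapped to a minimal FHIR Observation. The flattened openEHR input shape is a hypothetical simplification; real mappings are defined per archetype/template.

```typescript
// Map a blood pressure reading held in an openEHR CDR to a (minimal)
// FHIR Observation resource. The flattened input shape is an assumption
// for illustration; real mappings are defined per archetype/template.
interface OpenEhrBloodPressure {
  compositionUid: string;
  time: string;        // ISO timestamp
  systolic: number;    // mm[Hg]
  diastolic: number;   // mm[Hg]
}

function toFhirObservation(bp: OpenEhrBloodPressure, patientId: string) {
  return {
    resourceType: "Observation",
    status: "final",
    code: {
      coding: [{ system: "http://loinc.org", code: "85354-9", display: "Blood pressure panel" }],
    },
    subject: { reference: `Patient/${patientId}` },
    effectiveDateTime: bp.time,
    component: [
      {
        code: { coding: [{ system: "http://loinc.org", code: "8480-6", display: "Systolic blood pressure" }] },
        valueQuantity: { value: bp.systolic, unit: "mm[Hg]", system: "http://unitsofmeasure.org", code: "mm[Hg]" },
      },
      {
        code: { coding: [{ system: "http://loinc.org", code: "8462-4", display: "Diastolic blood pressure" }] },
        valueQuantity: { value: bp.diastolic, unit: "mm[Hg]", system: "http://unitsofmeasure.org", code: "mm[Hg]" },
      },
    ],
  };
}
```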
In terms of putting a GUI over the openEHR system, a GUI can be generated automatically from archetypes, and input data can be validated automatically against the same archetypes used to generate the GUI. There are many implementations of this, some open source (I have many examples in my GitHub repos).
Bottom line: you can create your EHR using openEHR, and provide an API or many APIs (custom, openEHR, FHIR, HL7 v2.x, XDS, ...).

Related

Implementation patterns for multiple programming languages in a single web application

I've only created web applications with one programming language (like Python or JS).
I'm aware that multiple programming languages are used to create advanced services, but I don't know exactly how they work together or what the different patterns for implementing this are.
Here's a scenario: we have a Node.js application that accepts hundreds of key-value pairs (say, as JSON) from a user, and we need to process that data using Haskell, which is compiled to a binary.
I have a hierarchy of data - say, a set of people and their managers, along with some performance metrics and points - and I want to pass it to a program written in Haskell to compute some values based on their roles, etc.
What methods could be used to pass the data into the program?
Should I be running a server that accepts the values as JSON (via HTTP) and parses them inside Haskell?
Or can I link the Haskell code to my Node.js application in some other way? In that case, how can I pass the data from the Node.js application to Haskell?
I'm also concerned about latency: it's a real-time computation that would happen on every request.
For instance, Facebook uses Haskell for spam filtering, and an engineer states that they use C++ and Haskell in that service: C++ accepts the input and passes it to Haskell, which returns the result. How might the interfacing work here?
What are the methods used to pass the data into the program? Should the binary services run as daemons?
The exact approach depends on the requirements at hand and the software components you plan to use.
If you are looking for interworking between different languages, there are various ways to achieve it.
The method based on addons (dynamically linked shared objects written in C++) provides an interface between JavaScript and C/C++ libraries. The Foreign Function Interface (FFI) and dynamic libraries (.so/.dylib) allow a function written in another language (e.g. Rust) to be called from the host language (Node.js). This relies on the require() function, which loads addons as ordinary Node.js modules.
For example, the node-ffi addon can be used to create bindings to native libraries without writing any C++ code, loading and calling dynamic libraries from pure JavaScript. The FFI-based approach is also used for dynamically loading and calling exported Go functions.
If you would like to call Go functions from Python, you can use the ctypes foreign function library to call the exported Go functions.
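Applied to the Node.js/Haskell case in the question, the FFI route could look roughly like the sketch below, assuming the Haskell code is compiled into a shared library that exposes a C-callable function via foreign export ccall. The library name, exported symbol, JSON contract and use of the ffi-napi package are illustrative assumptions; note also that a Haskell shared library needs its runtime initialised (hs_init), which the sketch glosses over.

```typescript
// Node.js side: call a function exported from a Haskell shared library
// (e.g. libscore.so). Library name, exported symbol and JSON contract
// are assumptions for illustration.
import ffi from "ffi-napi";

// Bind the exported C symbol: double compute_score(const char *json)
const libscore = ffi.Library("./libscore", {
  compute_score: ["double", ["string"]],
});

const payload = JSON.stringify({
  people: [{ id: 1, manager: null, metrics: { points: 42 } }],
});

// Synchronous in-process call - no HTTP round trip, which keeps latency low.
const score = libscore.compute_score(payload);
console.log("score from Haskell:", score);
```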
If you are looking for a design pattern for an architecture that accommodates modules and services built in various languages, it depends on your exact application and performance requirements.
In general, if you would like to develop a loosely coupled solution that takes advantage of emerging technologies (various languages and frameworks), then a microservices-based architecture can be more beneficial. It brings more independence, since a change in one module/service does not drastically impact other services. If your application is large or complex, you may need to go with a microservices decomposition pattern such as "Decompose by business capability" or "Decompose by subdomain". There are many related patterns: the "Database per Service" pattern, where each service has its own database; the "API gateway" pattern, which concerns how services are accessed by clients ("Client-side Discovery" or "Server-side Discovery"); and other variants that you can adopt based on your requirements.
The approach, in turn, also depends on the messaging mechanism (synchronous/asynchronous) and the message formats between microservices, as required by the solution.
For a near-perfect design, you may need to do some prototyping and performance/load testing and profiling of your components (both software and hardware) with the chosen approach, check whether the various system requirements and performance metrics are met, and decide accordingly.
Use Microservices Architecture.
Microservice architecture divides the application into various components, each serving a particular purpose; collectively, these components are called microservices. The components no longer depend on the application itself - each is literally and physically independent. Because of this separation, you can have a dedicated database for each microservice, deploy them to separate hosts/servers, and, moreover, use a specific programming language for each microservice.
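To make that concrete for the Node.js/Haskell scenario in the question, here is a minimal sketch of the HTTP route, assuming the Haskell computation is wrapped in its own small service (for example with Scotty or Servant) listening on a local port; the URL, port and JSON contract are illustrative assumptions.

```typescript
// Node.js side: send the hierarchy as JSON to a hypothetical Haskell
// microservice and read the computed scores back. Endpoint and payload
// shape are assumptions for illustration.
type Person = { id: number; manager: number | null; points: number };

async function computeScores(people: Person[]): Promise<Record<string, number>> {
  const res = await fetch("http://localhost:8081/compute", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ people }),
  });
  if (!res.ok) throw new Error(`Haskell service returned ${res.status}`);
  return res.json();
}

// Keeping the Haskell service running as a long-lived daemon avoids paying
// process start-up cost on every request, which matters for latency.
computeScores([{ id: 1, manager: null, points: 42 }])
  .then((scores) => console.log(scores));
```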

Decision path for Azure Service Fabric Programming Models

Background
We are looking at porting a 'monolithic' 3-tier web app to a microservices architecture. The web app displays listings to a consumer (think Craigslist).
The backend consists of a REST API that calls into a SQL DB and returns JSON for a SPA app to build a UI (there's also a mobile app). Data is written to the SQL DB via background services (FTP + worker roles). There are also some pages that allow writes by the user.
Information required:
I'm trying to figure out how (if at all) Azure Service Fabric would be a good fit for a microservices architecture in my scenario. I know the pros/cons of microservices vs a monolith, but I'm trying to figure out how the various microservice programming models apply to our current architecture.
Questions
Is Azure Service Fabric a good fit for this? If not, any other recommendations? Currently I'm leaning towards a bunch of OWIN-based .NET web sites, split up by area/service, each hosted on its own machine and tied together by an API gateway.
Which Service Fabric programming model would I go for? Stateless services with their own backing DB? I can't see how the Stateful or Actor model would help here.
If I went with Stateful services/Actors, how would I go about updating data as part of a maintenance/ad-hoc admin request? Traditionally we would simply log in to the DB and update the data, and the API would return the new data - but if it's persisted in memory/across nodes in a cluster, how would we update it? Would I have to expose all of this via methods on the service? Similarly, how would I import my existing SQL data into a stateful service?
For the Stateful services/actor model, how can I 'see' the data visually, with an object explorer/UI? Our data is our gold, and I'm concerned about the lack of control/visibility of it in the reliable services model.
Basically, is there some documentation on the decision path towards which programming model to go for? I could model a "listing" as an Actor and have millions of those - sure - but I could also have a Stateful service that stores the listing locally, and I could also have a Stateless service that fetches it from the DB. How does one decide which is the best approach for a given use case?
Thanks.
What is it about your current setup that isn't meeting your requirements? What do you hope to gain from a more complex architecture?
Microservices aren't a magic bullet. You mainly get four benefits:
You can scale and distribute pieces of your overall system independently. Service Fabric has very sophisticated tools and advanced capabilities for this.
You can deploy and upgrade pieces of your overall system independently. Service Fabric again has advanced capabilities for this.
You can have a polyglot system - each service can be written in a different language/platform.
You can use conflicting dependencies - each service can have its own set of dependencies, like different framework versions.
All of this comes at a cost: it introduces complexity and new ways your system can fail. For example, your fast, compile-time-checked in-proc method calls now become comparatively slow, failure-prone network calls. And these trade-offs are not specific to Service Fabric, by the way; this is just what happens when you go from in-proc method calls to cross-machine I/O - it doesn't matter what platform you use. The decision path here is a pro/con list specific to your application and your requirements.
To answer your Service Fabric questions specifically:
Which programming model do you go for? Start with stateless services with ASP.NET Core. It's going to be the simplest translation of your current architecture that doesn't require mucking around with your data layer.
Stateful has a lot of great uses, but it's not necessarily a replacement for your RDBMS. A good place to start is hot data that can be stored in simple key-value pairs, is accessed frequently and needs to be low-latency (you get local reads!), and doesn't need to be datamined. Some examples include user session state, cache data, a "snapshot" of the most recent items in a data stream (like the most recent stock quote in a stream of stock quotes).
Currently the only way to see or query your data is programmatically directly against the Reliable Collection APIs. There is no viewer or "management studio" tool. You have to write (and secure) an API in each service that can display and query data.
Finally, the actor model is a very niche model. It serves specific purposes but if you just treat it as a data store it will not work for you. Like in your example, a listing per actor probably wouldn't work because you can't query across that list, or even have multiple users reading the same listing simultaneously.

Using an IoT platform vs a normal web application

There are a lot of IoT platforms on the market, like AWS IoT and Microsoft Azure IoT Hub, and I understand the features offered by those platforms.
Questions:
Couldn't I implement all those features in a normal web application that handles the communication, run that application on a cluster of unmanaged servers, and get the same result?
When should I use a normal web application and when should I use an IoT platform?
Of course you can implement your own IoT hub in any web application on any cloud (or on-prem) platform; there is nothing secret or proprietary in those solutions. The question is: do you want to? What these platforms offer is a lot of built-in functionality that would take you serious time to get production-ready if you built it yourself.
So:
1) Yes, you can build it. Let's compare it to Azure IoT Hub and look at what that contains:
a) reliable messages to and from hub
b) periodic health pulses
c) connected device inventory and device provisioning
d) support for multiple protocols (e.g. HTTP, AMQP, MQTT...)
e) access control and security using tokens
.... and more. This is not supposed to be a full feature list, just an illustration that these solutions contain a whole lot of functionality which you may (or may not) need when building your own IoT solution; a minimal sketch of just the device-side messaging piece follows below.
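For a sense of what even the basic "messages from device to hub over a standard protocol" piece looks like when rolled by hand, here is a minimal device-side sketch using the mqtt npm package. The broker URL, topic layout and credential handling are illustrative assumptions, and everything else in the list above (provisioning, health, access control) would still have to be built around it.

```typescript
// Minimal device-side telemetry publisher over MQTT (mqtt npm package).
// Broker URL, topic layout and auth are assumptions for illustration;
// provisioning, health monitoring and token-based access control would
// all still need to be built around this.
import mqtt from "mqtt";

const client = mqtt.connect("mqtts://broker.example.com:8883", {
  clientId: "device-001",
  username: "device-001",
  password: process.env.DEVICE_KEY,
});

client.on("connect", () => {
  setInterval(() => {
    const telemetry = { temperature: 21.5, ts: Date.now() };
    // QoS 1 = at-least-once delivery; the broker acknowledges the publish.
    client.publish("devices/device-001/telemetry", JSON.stringify(telemetry), { qos: 1 });
  }, 5000);
});

client.on("error", (err) => console.error("mqtt error:", err));
```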
2) When does it make sense to build this yourself? I would say when you have a solution that doesn't really need all of that functionality, or where you can easily build or set up the parts you need yourself. Building all of that functionality doesn't, generally speaking, make sense unless you are building your own IoT platform.
Another aspect is the ability to scale and to offer a solution for multiple geographic locations. A web application on a cloud provider could easily be set up to both autoscale and cover multiple regions, but it is still something you would have to set up and manage yourself. It would likely also be more expensive to provide the same performance as the platform services do; they are built for millions of devices across a large number of customers, so their solution likely looks quite different under the hood.
Third is time to market: going with a platform service gets you up and running with your IoT solution fairly quickly, as opposed to building it yourself.
Figure out what requirements you want to support, how you want to scale, how many devices and so on. Then you can do a simple comparison of price and also what it would cost you to build the features you need.

How to evaluate a web service framework

I am trying to evaluate different web service frameworks for API development in .NET. So far the frameworks I've been looking at are:
ServiceStack
MVC
Web API
NancyFx
I'm trying to find some common talking-points between the frameworks so I know what to look for when picking a framework. The talking points I've got so far are:
The Framework beliefs and principles
The Architecture of the framework (Client and Service side)
The Stack the framework provides you with
The Ease of development within the stack (plugins etc)
End-to-end performance benchmarks
Scalability benchmarks
Framework documentation availability
Framework Support (Cross platform etc)
Pricing
Overall Conclusion
Can anyone think of anything else I should think about? By the end of the research I'm hoping to write about each framework in detail and to make comparisons as to which framework to choose for a given purpose. Any help would be greatly appreciated.
End-to-end productivity - The core essence of a Service is to deliver some value to its consumers. The end-to-end productivity of consuming Services should therefore be strongly considered: the ease with which Services can be consumed from clients, with the least effort, ultimately provides more value to clients, and that is often more valuable than the productivity of developing the Services themselves, since the value is multiplied across the Service's many consumers. As many Services constantly evolve, the development workflow for updating Services and how easy it is to determine what has changed (i.e. whether they have a static API) also impacts productivity on the client.
Interoperability - Another goal of a Service is interoperability: how well Services can be consumed from heterogeneous environments. Most web service frameworks just do HTTP, but in many intranet environments sending API requests via an MQ is more appropriate, as it provides greater resilience than HTTP, time decoupling, natural load balancing, decoupled endpoints, improved messaging workflows, error recovery, etc. There are also many enterprises (and enterprise products) that still only support or mandate SOAP, so having SOAP endpoints and supporting XSD/WSDL metadata can also be valuable.
Versionability - Some API designs are naturally better suited to versioning, where evolving Services can be enhanced defensively without breaking existing Service consumers.
Testability and mockability - You'll also want to compare the ease with which Services can be tested and mocked, to determine how easy it is to create integration tests and whether that requires new knowledge and infrastructure. This also determines how well a framework supports parallel client development, which matters when front-end and back-end teams build a solution in parallel: the API contracts of a Service can be designed and agreed upon before development to ensure they meet the requirements, and then the two teams can implement against them independently. If the Services haven't been implemented yet, the clients "mock" the Service responses until they have, then switch to the real Services once they exist (see the sketch after this list).
Learnability - How intuitive it is to develop Services and the amount of cognitive and conceptual overhead required also affect productivity. The ability to reason about how a service framework works and what it does has an impact on your solution's overall complexity, on your team's ability to make informed implementation decisions that affect performance and scalability, and on the effort it takes to ramp up new developers on your solution.
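As an illustration of the contract-first mocking mentioned under testability, here is a framework-agnostic sketch; the names and shapes are hypothetical and not tied to any of the frameworks above.

```typescript
// Contract-first parallel development: the client team codes against the
// agreed contract while the real Service is still being built. All names
// and shapes here are illustrative assumptions.
interface OrderService {
  getOrder(id: string): Promise<{ id: string; total: number; status: string }>;
}

// Mock used by the client team (and in integration tests) until the real
// Service exists.
class MockOrderService implements OrderService {
  async getOrder(id: string) {
    return { id, total: 99.5, status: "pending" };
  }
}

// Real implementation, swapped in later without touching the calling code.
class HttpOrderService implements OrderService {
  constructor(private baseUrl: string) {}
  async getOrder(id: string) {
    const res = await fetch(`${this.baseUrl}/orders/${id}`);
    return res.json();
  }
}
```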

When to use domain-driven development and when to use database-driven development?

Can anybody give a good answer on when database-driven development should be used and when domain-driven development should be used? Both development approaches have their importance in their respective areas, but I am not clear on which approach is appropriate in what type of situation. Any recommendations?
First, for some background: Martin Fowler actually described three different "patterns" in his book Patterns of Enterprise Application Architecture: Transaction Script, Active Record and Domain Model. DDD uses the Domain Model pattern for the overall architecture and describes a lot of practices and patterns for implementing and designing this model.
Transaction script is an architecture where you don't have any layering. The same piece of code reads/writes the database, processes the data and handles the user interface.
Active Record is one step up from that: you split off your UI, but your business logic and data layer still live together in active record objects that are modeled after the database.
A domain model decouples the business logic that lives in your model from your data-layer. The model knows nothing about the database.
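As a rough illustration of the difference between the Active Record and Domain Model shapes (class, method and table names are assumptions for illustration, not from any particular framework):

```typescript
// Minimal stand-in for a database handle (illustrative assumption).
interface Database {
  run(sql: string, params: unknown[]): Promise<void>;
}

// Active Record style: the object mirrors a table and persists itself.
class CustomerRecord {
  constructor(public id: number, public name: string, public creditLimit: number) {}
  async save(db: Database): Promise<void> {
    await db.run("UPDATE customers SET name = ?, credit_limit = ? WHERE id = ?",
      [this.name, this.creditLimit, this.id]);
  }
}

// Domain Model style: the entity holds business rules and knows nothing
// about storage; persistence goes through a repository abstraction.
class Customer {
  constructor(readonly id: number, private creditLimit: number) {}
  canPlaceOrder(orderTotal: number): boolean {
    return orderTotal <= this.creditLimit; // business rule lives in the model
  }
}

interface CustomerRepository {
  findById(id: number): Promise<Customer | null>;
  save(customer: Customer): Promise<void>;
}
```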
And now we come to the interesting part:
The cost of this added separation is of course extra work. The benefits are better maintainability and flexibility.
Transaction Script is good when you have few or no business rules: you just want to do data entry, and either there are no verification steps or all the verification is implemented in the database.
Active Record adds some flexibility to that. Because you decouple your UI, you can, for example, reuse the layer beneath it between applications, and you can easily add some business rules and verification logic to the business objects. But because these objects are still tightly coupled to the database, changes in the data model can be very expensive.
You use a domain model when you want to decouple your business logic from the database. This enables you to handle changing requirements more easily. Domain-Driven Design is a method for making optimal use of this added flexibility to implement complex solutions without being tied to a database implementation.
Lots of tooling makes data-driven solutions easier. In the Microsoft space it is very easy to visually design websites where all the code lives right behind the web page. This is a typical Transaction Script solution, and it is great for easily creating simple applications. Ruby on Rails has tools that make working with active record objects easier. This might be a reason to go data-driven when you need to develop simpler solutions. For applications where behaviour is more important than data, and where it's hard to define all the behaviour up front, DDD is the way to go.
I've asked a similar question: Where do I start designing when using O/R mapping? Objects or database tables?
From the answers I got I would say: unless you have a concrete reason to use database-driven development, use domain-driven development.
Think of it this way.
The problem domain exists forever. Your class definitions will reflect the eternal features of the domain.
The relational database is today's preferred persistence mechanism. At some point, we'll move past this to something "newer", "better", "different". The database design is merely one implementation; it reflects a solution architecture more than the problem domain.
Consequently, it's domain first. Classes reflect the problem domain and the universal truths. Relational database and ORM come second and third. Finally, fill in other stuff around the model.
As a side note to mendelt's post, I feel there is a fourth pattern: one that is layered and separates business logic from persistence and storage, yet uses no "entities" or "business objects". A halfway point, if you will, between Transaction Script and DDD.
In a good deal of the systems I've worked on, the persistence layer (repositories) used SqlClient directly and returned datasets to a calling service. The services made decisions and compiled views, which were sent to the user through the controller. You might consider the service layer a business model, and you'd be right, but it wasn't a "domain" model in the DDD sense. Still, ALL business logic occurred in that one layer, period. Each layer had its job: the views displayed data, the controllers determined views, the persistence layer handled storage, and the services worked in between controllers and persistence.
The point is this: DDD is an approach to defining a business through a ubiquitous language (UL), tests, and code. It is not about entities, value objects and aggregates; those things are just by-products of the OOP purist's approach to DDD.
Just more thoughts for your consideration.
For complex business models, I prefer a mix of Active Record and DDD. The domain objects know how to save themselves, and data actions are done against a repository (NHibernate can act as a generic repository, if you look at a repository as something that exposes data to the model as a collection). The business logic resides in the domain entities, and even some encapsulation of value types can be accomplished, although only when there is a business need. Some implementations of DDD favor removing all public setters and only modifying entities through methods; I'm not a fan of that unless there is a very good business need.
It seems to me that this implementation gives you the ease of use of ActiveRecord and the business logic encapsulation of DDD.
Domain-driven development is surely the way to go; it makes more sense and adds flexibility.
