Transaction Manager and JDBC template to dynamically route to multiple data sources (physically different Amazon RDS connections) at runtime

We went through multiple examples on the internet, and most forums referred to Spring's AbstractRoutingDataSource implementation, but the challenge we are facing is identifying unknown databases at runtime. The business requirement is such that we don't have the opportunity to change the application properties file by adding this new/unknown database and redeploying the code.
For that we have gone ahead with Spring's DefaultTransactionDefinition, a programmatic implementation rather than an XML one.
Is there a way to implement an XML configuration of the same for an unknown/new data source without changing the application properties or redeploying?
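The runtime-registration idea described above can be sketched independently of Spring. This is a minimal illustration of the lookup-key routing that AbstractRoutingDataSource performs, with plain strings standing in for real javax.sql.DataSource objects; all class and method names here (TenantRouter, registerTenant, resolveTarget) are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal sketch of the routing idea behind Spring's AbstractRoutingDataSource.
// Connection URLs stand in for real javax.sql.DataSource instances.
public class TenantRouter {
    // Current tenant key, set per request/thread
    // (analogous to determineCurrentLookupKey()'s context).
    private static final ThreadLocal<String> TENANT = new ThreadLocal<>();

    // Target "data sources" keyed by tenant; new entries can be added at
    // runtime without redeploying or touching a properties file.
    private final Map<String, String> targets = new ConcurrentHashMap<>();

    public static void setTenant(String tenant) {
        TENANT.set(tenant);
    }

    public void registerTenant(String tenant, String jdbcUrl) {
        targets.put(tenant, jdbcUrl);
    }

    // Mirrors the lookup-key resolution step: pick the target for the
    // current tenant, or fail loudly if none is registered.
    public String resolveTarget() {
        String key = TENANT.get();
        String url = targets.get(key);
        if (url == null) {
            throw new IllegalStateException("No data source registered for tenant: " + key);
        }
        return url;
    }
}
```

With a real AbstractRoutingDataSource subclass, registerTenant would instead rebuild the target-DataSource map and call afterPropertiesSet() so Spring picks up the new entry.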

Related

jOOQ: how to generate Records without a connection to the database

What does your approach to generating Records at compile time look like without a connection to the database? I use the Maven plugin for that, but it still needs a connection to the database, which I don't have.
jOOQ offers four out-of-the-box solutions for generating code without a connection to a live database, including:
JPADatabase if your meta data source of truth is encoded in JPA annotations
XMLDatabase for XML based meta data
DDLDatabase for DDL script based meta data (e.g. Flyway migrations)
LiquibaseDatabase for Liquibase migration based meta data
All of the above meta data sources come with their own set of limitations, including lack of support for some vendor-specific functionality, but that might not affect you. In simple cases, the DDLDatabase in particular can be really useful to achieve quicker turnarounds when generating code.
If vendor specific functionality is a thing in your application, then the official recommendation is to use testcontainers to set up your schema for jOOQ code generation (and integration testing!).
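For the DDLDatabase case mentioned above, the Maven code generator can be pointed at DDL scripts instead of a live connection. This is a hedged sketch of such a configuration: the script path and target package are placeholders to adapt to your project.

```xml
<!-- Sketch: jOOQ code generation from DDL scripts, no live database needed.
     Paths and package names are placeholders. -->
<plugin>
  <groupId>org.jooq</groupId>
  <artifactId>jooq-codegen-maven</artifactId>
  <configuration>
    <generator>
      <database>
        <name>org.jooq.meta.extensions.ddl.DDLDatabase</name>
        <properties>
          <!-- Where the DDL (e.g. Flyway migration) scripts live -->
          <property>
            <key>scripts</key>
            <value>src/main/resources/db/migration/*.sql</value>
          </property>
        </properties>
      </database>
      <target>
        <packageName>com.example.generated</packageName>
        <directory>target/generated-sources/jooq</directory>
      </target>
    </generator>
  </configuration>
</plugin>
```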

Yahoo Vespa: create a search definition at runtime

I would like to know if there is any API in the Vespa platform which I can use to create a search definition (SD) at runtime.
This is a requirement because the documents that I will index depend on the user input in my front-end application.
No, there is no such API available. The idea of deploying an immutable application package (including the SD) is a conscious design choice to ensure appropriate management of multiple search clusters in multiple locations over time as well as enabling source control management.
If needed, one could build what you describe "on top" of Vespa: A web service that will let you mutate an existing SD and, upon submit, create the updated application package and deploy to your Vespa cluster. Vespa will (in most cases) handle schema changes without impacting serving.
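For context on what such a service would be mutating, a search definition is a plain-text file in the application package. This is a hedged, minimal example; field and schema names are illustrative.

```
# Minimal sketch of an SD file (e.g. schemas/product.sd in the application package)
search product {
    document product {
        field title type string {
            indexing: summary | index
        }
    }
}
```

A wrapper service like the one described would regenerate a file of this shape from user input, repackage the application, and redeploy it.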

How does autoscaling work with an Azure web application?

I'm curious how auto scaling works with an Azure web application, specifically how each new instance gets a copy of the source code for the application.
Is this what happens, or is the source stored somewhere and each new instance points to it?
The entire virtual machine is duplicated. So usually you might have just one database but multiple app instances receiving and processing the requests. If you need an "autoscaling" database too, there are database solutions that handle synchronization across multiple machines, but in that case you're probably better off using Azure's native database, which takes care of that.

Fetching data from cerner's EMR / EHR

I don't have much experience in the medical domain.
We are evaluating a requirement from our client, who is using the Cerner EMR system.
As per the requirement, we need to expose the Cerner EMR or fetch some EMR / EHR data and display it in a SharePoint 2013 portal.
To meet this requirement, what kind of integration options does Cerner propose? Are there any APIs or web services exposed which can be used to build custom solutions for the same?
As far as I know, Cerner does expose EMR / EHR information in HL7 format, but I don't have any idea how to access that.
I have also requested the same from Cerner and am awaiting a reply from their end.
If anybody has been associated with a similar kind of job, could you throw some light on this and provide me with some insights?
You will need to request an interface between your organization and the facility with the EMR. An interface in the health care IT world is not the same as a GUI. It is the mechanism (program/tool) that transfers HL7 data between one entity and the other. There will probably be a cost to have an interface set up. However, that is the traditional way Cerner communicates with third parties. HIPAA laws will require that this connection be very secure.
You might also see if the facility with the EMR has an existing interface that produces the info you are after. You may be able to share that data or have a flat file generated from that interface that you could get access to. Because of HIPAA regulations, your client may be reluctant to share information in that manner.
I would suggest you start with your client's interface/integration team. They would be the ones that manage the information into and out of Cerner. They could also shed some light on how they prefer to see things done.
Good Luck
There are two ways of achieving this that I know of. One is direct connectivity to Cerner's Oracle database. This seems less likely to be possible, as Cerner doesn't allow other vendors to have direct access to their database.
The other way is to use Cerner's mPage web services, which is how we have achieved this. The client needs to host the web services on IBM WAS or some other container; we used WAS as that was readily available to us. Once the hosting is done, you will get a URL, and using that you can execute any CCL program, which will return the data in JSON/XML format. The mPage web service uses basic HTTP authentication.
Now, the CCL has to be written in a way that returns the data you require.
We have had a successful setup and have been working on this since 2014. For more details you can also try the uCern portal.
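Since the answer above mentions calling the hosted URL with basic HTTP authentication, here is a hedged sketch of building such a request in Java. The base URL, path, query parameter, and credentials are all hypothetical placeholders; the real endpoint and CCL program name come from your client's hosting setup.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MPageRequestSketch {
    // Builds (but does not send) a request to a hypothetical mPage web
    // service endpoint, attaching a basic-auth Authorization header.
    public static HttpRequest build(String baseUrl, String user, String password) {
        // Basic auth: base64("user:password")
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/mpage/ccl/run?format=json")) // placeholder path
                .header("Authorization", "Basic " + token)
                .GET()
                .build();
    }
}
```

Sending the request (e.g. with java.net.http.HttpClient) would then return the JSON/XML payload produced by the CCL program.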
Thanks,
Navin

Single Shared Database, Fluent NHibernate, Many clients

I am working on an inventory application (C# .NET 4.0) that will simultaneously inventory dozens of workstations and write the results to a central database. To save myself having to write a DAL, I am thinking of using Fluent NHibernate, which I have never used before.
Is it safe and good practice to allow the inventory application, which runs as a standalone application, to talk directly to the database using NHibernate? Or should I be using a client-server model where all access to the database is via a server which then reads/writes to the database? In other words, if 50 workstations were being inventoried concurrently, there would be 50 active DB sessions. I am thinking of using GUID comb for the PK IDs.
Depending on the environment in which your application will be deployed, you should also consider that direct database connections to a central server might not always be allowed for security reasons.
Creating a simple REST Service with WCF (using WebServiceHost) and simply POST'ing or PUT'ing your inventory data (using HttpClient) might provide a good alternative.
As a result, clients can get very simple and can be written for other systems easily (linux? android?) and the server has full control over how and where data is stored.
it depends ;)
NHibernate has optimistic concurrency control out of the box, which is good enough for many situations. So if you just create data on 50 different stations, there should be no problem. If creating data on one station depends on data from all stations, it gets tricky, and a central server would help.
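The core mechanism behind that optimistic concurrency control is a version check on update: NHibernate issues something like "UPDATE ... WHERE Id = ? AND Version = ?" and raises a stale-object error if no row matched. This sketch models that compare-and-bump in memory (in Java rather than C#, purely for illustration); all names are illustrative.

```java
import java.util.concurrent.atomic.AtomicReference;

// Models the version-column check behind optimistic concurrency:
// an update succeeds only if the writer saw the current version.
public class OptimisticRecord {
    public static final class Versioned {
        final int version;
        final String data;
        Versioned(int version, String data) {
            this.version = version;
            this.data = data;
        }
    }

    private final AtomicReference<Versioned> row =
            new AtomicReference<>(new Versioned(0, "initial"));

    // Returns false when the expected version is stale, mirroring
    // the stale-object failure an ORM would surface as an exception.
    public boolean update(int expectedVersion, String newData) {
        Versioned current = row.get();
        if (current.version != expectedVersion) {
            return false; // another workstation committed first
        }
        return row.compareAndSet(current, new Versioned(expectedVersion + 1, newData));
    }

    public int currentVersion() {
        return row.get().version;
    }
}
```

Since each workstation in the scenario mostly inserts its own rows, these version conflicts would be rare, which is why the answer calls the out-of-the-box behavior good enough.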
