I'm working on a server query engine that will return FHIR resources and I've run into a problem.
I can successfully receive Get and Search queries that use simple parameters (like Composition/4 or Patient?name=smith), but I can't get it to recognize more complex, and more useful, parameters like Composition?subject:Patient=4 or type=[system]|[value].
How are these types of parameters passed and what should I be looking for on the server?
If you look at the bottom of the Composition page, you'll see it has a search parameter called "type", which is of type "token". As you already found out yourself, this has the form
[system]|[value]
where the system is a full URL. Some common systems in use can be found here: http://hl7.org/fhir/2015May/terminologies-systems.html
In this case, you should be using type=http://loinc.org|60591-5
The formats of all more complex search params can be found on the search documentation page (http://hl7.org/fhir/2015May/search.html#2.1.1.1).
If you need an example of how to implement search for this, take a look at the open-source .NET implementation called Spark: https://github.com/furore-fhir/spark
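In case a concrete sketch helps, here is a minimal TypeScript example (not taken from Spark) of splitting the raw query-string value of a token parameter into its system and code parts; the function name and shape are illustrative only:

    // A token search parameter value has the form [system]|[value]; the query
    // string may arrive percent-encoded (the "|" as %7C), so decode it first.
    function parseTokenParam(raw: string): { system?: string; code: string } {
      const value = decodeURIComponent(raw);
      const idx = value.indexOf("|");
      if (idx === -1) {
        // No "|": just a code, to be matched regardless of system.
        return { code: value };
      }
      return { system: value.substring(0, idx), code: value.substring(idx + 1) };
    }

    // GET /Composition?type=http://loinc.org|60591-5
    const { system, code } = parseTokenParam("http://loinc.org|60591-5");
    console.log(system, code); // http://loinc.org 60591-5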
We have many different documentation sites and I would like to search a keyword across all of these sites. How can I do that?
I already thought about implementing a simple web scraper, but this seems like a very ugly solution.
An alternative may be to use Elasticsearch and somehow point it to the different doc repos.
Are there better suggestions?
Algolia is the absolute best solution that I can think of. There's also Typesense and Meilisearch of course.
Algolia is meant specifically for situations like yours, so it even comes with a crawler.
https://www.algolia.com/products/search-and-discovery/crawler/
https://www.algolia.com/
https://typesense.org/
https://www.meilisearch.com/
Here's a fun page comparing them (probably a little biased in Typesense's favor)
https://typesense.org/typesense-vs-algolia-vs-elasticsearch-vs-meilisearch/
Here are some example sites that use Algolia Search
https://developers.cloudflare.com/
https://getbootstrap.com/docs/5.1/getting-started/introduction/
https://reactjs.org/
https://hn.algolia.com/
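Once the crawler has populated an index, querying it from your documentation frontend is only a few lines with the JavaScript client. A rough sketch, assuming the algoliasearch v4 client; the app ID, API key and index name are placeholders:

    import algoliasearch from "algoliasearch/lite";

    // Placeholders: use the application ID, search-only API key and index name
    // from your own Algolia dashboard / crawler configuration.
    const client = algoliasearch("YOUR_APP_ID", "YOUR_SEARCH_ONLY_KEY");
    const index = client.initIndex("docs");

    async function searchDocs(keyword: string) {
      const { hits } = await index.search(keyword, { hitsPerPage: 10 });
      return hits;
    }

    searchDocs("authentication").then((hits) => console.log(hits));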
If you personally are just trying to search for a keyword, then as long as the sites are indexed by Google, you can always search with the format site:{domain} "keyword"
You can check out Meilisearch for your use case. Meilisearch is a Rust-based, open-source search engine.
Meilisearch comes with a document scraper tool ( https://github.com/meilisearch/docs-scraper ) that can scrape documentation content and then also index it.
To use it, you define exactly which content to extract in the scraper's configuration file, and then you can run the tool using Docker.
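Once the scraper has filled an index, searching it from your own frontend works with the official JavaScript client. A small sketch, where the host, API key and index name are placeholders (the scraper's configuration decides the actual index uid):

    import { MeiliSearch } from "meilisearch";

    // Placeholders: point this at your Meilisearch instance; the index uid is
    // whatever the docs-scraper configuration writes into.
    const client = new MeiliSearch({
      host: "http://localhost:7700",
      apiKey: "YOUR_SEARCH_KEY",
    });

    async function searchDocs(keyword: string) {
      const results = await client.index("docs").search(keyword, { limit: 10 });
      return results.hits;
    }

    searchDocs("authentication").then((hits) => console.log(hits));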
I have this requirement:
We have a journal article and we want sections whose content is intended for either internal or external users of the application.
We are able to hide the content from rendering by implementing a custom template on the web content display and using a simple custom field on the user that helps us classify them.
Having said that, when we search something as an external user, the search portlet can fetch an article where the search text is part of the internal-user content, and due to the above-mentioned template that content is not visible.
In short, from the user's perspective the resulting article does not match the searched term.
I would like some pointers on whether there is a mechanism to ensure that when an external user searches something, we only search the dynamic elements of the document that match the user type.
We have thousands of such articles, and creating multiple copies of the same article does not seem like a viable solution, so any pointers would be a great help.
Liferay version : 6.2 GA4 CE
Thanks!
AJ
First of all: not finding a search term in a document can be a sign of well-working synonym resolution in the search engine. It's questionable whether this behaviour is wrong in general or only in this particular case. Remember Google bombs?
That being said, I believe this architecture of half-visible documents is flawed from the beginning. Ideally I'd suggest changing it, for example by splitting the information into two articles, so that you can use the standard permissions to resolve visibility. If you link the two, you can determine which article or template to use. It's not an ideal solution, but it might be a workaround.
Another workaround might be to change Liferay's indexer component and index two different versions of the article, with two different permissions. Of course, you'll have to change the search side as well, so that you find each article at most once, even though it's now in the search engine twice.
Again, not ideal, but this might be the quickest fix you can get right now without changing the underlying architecture. However, changing the underlying architecture is my actual recommendation.
I'm working on a simple search engine to let users filter professional profiles based on some criteria.
Let's say I'm looking for a profile able to speak two languages, Italian (1) and Spanish (2): a GET request could look like ...&languages=1,2&....
But let's say I'm looking for a profile able to speak Italian (1) very well (10) and Spanish (2) quite well (9).
How to structure a GET request for this instance?
Easy as ...&languages[1]=10&languages[2]=9&...
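If it helps, here is a small TypeScript sketch of producing and reading that shape with URLSearchParams; the parameter name "languages" comes from the question, everything else (ids, levels) is illustrative. Note the brackets get percent-encoded on the wire, which most frameworks decode transparently:

    // Client side: build ...&languages[1]=10&languages[2]=9&...
    const proficiencies: Record<number, number> = { 1: 10, 2: 9 }; // languageId -> level
    const params = new URLSearchParams();
    for (const [languageId, level] of Object.entries(proficiencies)) {
      params.append(`languages[${languageId}]`, String(level));
    }
    console.log(params.toString()); // languages%5B1%5D=10&languages%5B2%5D=9

    // Server side: recover languageId -> required level pairs from the query string.
    function parseLanguageFilters(query: URLSearchParams): Map<number, number> {
      const filters = new Map<number, number>();
      for (const [key, value] of query.entries()) {
        const match = key.match(/^languages\[(\d+)\]$/);
        if (match) {
          filters.set(Number(match[1]), Number(value));
        }
      }
      return filters;
    }

    console.log(parseLanguageFilters(params)); // Map(2) { 1 => 10, 2 => 9 }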
Which table in Liferay stores the predefined values given for a structure?
Also, is there a facility in Liferay to populate these values dynamically using web services?
The API used to be JournalStructureService; however, as the documentation states, this has been replaced with the Dynamic Data Mapping API, which, for example, you can find under DDMStructureService in version 6.2.
This gives you a hint of where to find the underlying data. However, you don't want to write to the database manually; you do want to use the API to change values. Trust me. Consider the database an implementation detail and leave it alone, if nothing else to make your next upgrade experience easier. You should never change any values in the database manually without knowing exactly what you're doing, and the keyword here is "exactly": you'll fail to anticipate all the possible side effects. Don't touch it.
As @Olaf said, depending on your Liferay version you will need to use either the JournalStructureService or the DDMStructureService. If you want to call the Liferay services through a web API, you have two options: the Axis API, where you can obtain the WSDL (domain:port/api/axis), or the JSON API (domain:port/api/jsonws). In many cases you are going to need a token to use these services.
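As a rough illustration of the JSON route, here is a hedged TypeScript sketch of calling a DDM structure method over /api/jsonws with basic auth; the exact method path (get-structure), its parameter, the structure id and the credentials are assumptions to verify against the service listing your portal exposes at /api/jsonws:

    // Hedged sketch (Node/TypeScript): calling a Liferay 6.2 JSON web service.
    // The /ddmstructure/get-structure path and its "structureId" parameter are
    // assumptions based on DDMStructureService; check the live /api/jsonws
    // listing before relying on them.
    const base = "http://localhost:8080/api/jsonws";
    const credentials = Buffer.from("test@liferay.com:test").toString("base64");

    async function getStructure(structureId: number) {
      const response = await fetch(
        `${base}/ddmstructure/get-structure?structureId=${structureId}`,
        { headers: { Authorization: `Basic ${credentials}` } }
      );
      if (!response.ok) {
        throw new Error(`Liferay JSON WS call failed: ${response.status}`);
      }
      return response.json();
    }

    getStructure(12345).then((structure) => console.log(structure));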
We need to create a search input field like the one on http://maps.google.com
The key functionality is a suggest list with appropriate results. We
have not found this feature in the API.
Analyzing maps.google.com, we see that the suggest list is received
from a GET request to this URL:
https://maps-api-ssl.google.com/maps/suggest?q=%D0%BC%D0%BE%D1%81&cp=...
There are many parameters, including the data from the search field. This GET
request returns the suggest list.
Is it possible to use this URL for our needs with our own data? Or
how can we achieve this some other way?
Similar to our needs: http://cdn.michaelhart.me/mh/instant/maps/
check this out:
http://tech.cibul.net/geocode-with-google-maps-api-v3/
Theoretically you shouldn't use maps-api-ssl.google.com/maps/suggest, as it might not be legal. I found this quote from a Google employee:
'Endpoints like this that are used by Google Maps but not documented as
part of the Maps API should be considered private interfaces.
Consequently use of those end points is a breach of the Terms of
Service. In addition any existing API credentials you may have are
completely unrelated to these end points because they are not served
by API infrastructure'
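If what you are after is just the suggest box, the documented route is the Places library's Autocomplete widget that ships with the Maps JavaScript API v3, rather than the private suggest endpoint. A minimal sketch, assuming the API has been loaded with libraries=places and that an <input id="search"> exists on the page (ids and options are illustrative):

    // Assumes the Maps JavaScript API v3 is loaded with &libraries=places and
    // that an <input id="search"> element exists on the page.
    function initSearchBox(): void {
      const input = document.getElementById("search") as HTMLInputElement;
      const autocomplete = new google.maps.places.Autocomplete(input);

      // Fires when the user picks a suggestion from the drop-down.
      autocomplete.addListener("place_changed", () => {
        const place = autocomplete.getPlace();
        console.log(place.name, place.geometry?.location?.toString());
      });
    }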