Looking for ICD10 API [closed] - medical

Does anybody know of a good ICD-10 API for diagnostic code lookups that they can recommend? I am currently building a simple app to tag patients with medical conditions, and the idea is to have a lookup API where one can type "asthma", for example, and get back all the different ICD-10 codes for asthma.

My R package, icd, converts ICD-9 and ICD-10 codes to descriptions, in addition to its main function of finding comorbidities. Documentation is at https://jackwasey.github.io/icd/ and code at https://github.com/jackwasey/icd . It does this using the function explain_code. It currently uses ICD-10-CM, i.e. the USA billing-adapted ICD-10 code set, which is in general more specific than the canonical WHO version, but does have some areas of less detail.
E.g. WHO ICD-10 has "HIV disease resulting in Pneumocystis jirovecii pneumonia" as a subdivision of HIV infection, whereas ICD-10-CM just has HIV. On the other hand, ICD-10-CM has "Sucked into jet engine, subsequent encounter", whereas the WHO is happy with the terribly vague "Person on ground injured in air transport accident".
The volume of data for all the descriptions is not very high, just a handful of megabytes, so although an API may seem convenient, you might consider just shipping all the data with your app and not having to ping some random server.
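If you do go the local-data route, here is a minimal Python sketch of that idea (this is not part of the icd package; it assumes you have exported the codes and descriptions to a two-column tab-separated file from the CMS or WHO release):

```python
# Minimal local ICD-10 lookup: load code/description pairs once, then search in memory.
# Assumes icd10_codes.tsv has lines like "J45.909\tUnspecified asthma, uncomplicated".
def load_codes(path="icd10_codes.tsv"):
    codes = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            code, _, description = line.rstrip("\n").partition("\t")
            if code:
                codes[code] = description
    return codes

def lookup(codes, term):
    """Return all (code, description) pairs whose description mentions the term."""
    term = term.lower()
    return [(c, d) for c, d in codes.items() if term in d.lower()]

if __name__ == "__main__":
    codes = load_codes()
    for code, desc in lookup(codes, "asthma"):
        print(code, desc)
```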

I'm going to assume you're ignoring all of the usual stuff around variant spellings of medical terms, proper terms vs. colloquialisms, labels vs. descriptions, etc. that gets to be a pain with term/code finders.
If you want to use a hosted option and are OK with the terms of use, you could use UMLS (https://uts.nlm.nih.gov/home.html#apidocumentation). It's a great resource, but the use case you're describing isn't necessarily what it's intended to address.
Personally - and I usually don't like to roll my own stuff - I'd consider doing your own thing. You could do something focused on your needs and tailor it to any specific behaviors you might want (like preferring specific codes based on an organization's billing preference, for example). You could also probably make it far, far more ... perky ... and address short forms of terms (e.g. synonyms like "DVT") or misspellings ("asthma" vs. "athsma"). If you go that route, I'd suggest getting your hands on the ICD-10 code info and then mashing it into Elasticsearch. You could extend the data by mixing it with other info and really make it hum. And Elastic is wicked fast.
That's just my $0.02, though.
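For illustration, a minimal sketch of that Elasticsearch approach using the official Python client (8.x style); the index name, field names, sample documents and the local cluster URL are all assumptions, not anything prescribed above:

```python
from elasticsearch import Elasticsearch

# Assumes a local, unsecured cluster; swap in your own connection details.
es = Elasticsearch("http://localhost:9200")

# Index a couple of code/description documents (normally you'd bulk-load the full set).
es.index(index="icd10", id="J45.909",
         document={"code": "J45.909",
                   "description": "Unspecified asthma, uncomplicated",
                   "synonyms": ["asthma"]})
es.index(index="icd10", id="I80.209",
         document={"code": "I80.209",
                   "description": "Phlebitis and thrombophlebitis of unspecified deep vessels of unspecified lower extremity",
                   "synonyms": ["DVT", "deep vein thrombosis"]})

# Fuzzy matching tolerates misspellings like "athsma".
resp = es.search(index="icd10", query={
    "multi_match": {"query": "athsma",
                    "fields": ["description", "synonyms"],
                    "fuzziness": "AUTO"}})
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["code"], hit["_source"]["description"], hit["_score"])
```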

There is a project called the "Unified Medical Language System (UMLS)", funded by the NIH, and apparently they are working on a RESTful web API for medical terms.
https://documentation.uts.nlm.nih.gov/rest/home.html
I haven't worked with their API yet, and the samples I am seeing on their website suggest they are more SNOMED CT oriented.
The option I would go for is to get the whole ICD-10-CM from CMS and build my own Web API.
https://www.cms.gov/Medicare/Coding/ICD10/2016-ICD-10-CM-and-GEMs.html
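As a rough sketch of that self-hosted approach, here is a tiny Flask lookup endpoint; the in-memory table below is a placeholder that you would populate from the CMS download above:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Toy in-memory table; in practice, load every code/description from the CMS release.
ICD10 = {
    "J45.909": "Unspecified asthma, uncomplicated",
    "J45.21": "Mild intermittent asthma with (acute) exacerbation",
}

@app.route("/icd10/search")
def search():
    term = request.args.get("q", "").lower()
    matches = [{"code": c, "description": d}
               for c, d in ICD10.items() if term and term in d.lower()]
    return jsonify(matches)

if __name__ == "__main__":
    app.run(debug=True)  # then try GET /icd10/search?q=asthma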

You can check the full documentation for the WHO ICD API at https://icd.who.int/icdapi
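Per that documentation, access uses an OAuth2 client-credentials token. A rough sketch with requests follows; the endpoint paths, scope and headers are taken from my reading of the linked docs and should be confirmed there before use:

```python
import requests

CLIENT_ID = "your-client-id"        # obtained by registering at icd.who.int/icdapi
CLIENT_SECRET = "your-client-secret"

# 1. Exchange client credentials for a bearer token.
token = requests.post(
    "https://icdaccessmanagement.who.int/connect/token",
    data={"grant_type": "client_credentials", "scope": "icdapi_access",
          "client_id": CLIENT_ID, "client_secret": CLIENT_SECRET},
).json()["access_token"]

# 2. Search the classification (illustrative endpoint; check the docs for the exact
#    path and parameters of the release you need).
resp = requests.get(
    "https://id.who.int/icd/entity/search",
    params={"q": "asthma"},
    headers={"Authorization": f"Bearer {token}",
             "Accept": "application/json",
             "Accept-Language": "en",
             "API-Version": "v2"},
)
for entity in resp.json().get("destinationEntities", []):
    print(entity.get("title"), entity.get("id"))
```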

Related

Approximate string matching algorithms state-of-the-art [closed]

I am looking for state-of-the-art algorithms for approximate string matching.
Can you offer me references (articles, theses, ...)?
Thank you.
You might have got your answer already, but I want to share my points on approximate string matching so that others might benefit. I am speaking from experience, having worked on cloud services that had to handle really large-scale matching requirements.
If we just want to talk about the Approximate String matching algorithms, then there are many.
A few of them are:
Jaro-Winkler, Edit distance(Levenshtein), Jaccard similarity, Soundex/Phonetics based algorithms etc.
A simple googling would give us all the details.
The irony is that they work when you are matching two given input strings - fine in theory, and good for demonstrating how fuzzy or approximate string matching works.
However, a grossly understated point is how to use them in production settings. Not everybody I know of who was scouting for an approximate string matching algorithm knew how to make it work in a production environment.
Assume we have a list of millions of names: searching a given input name against every entry in the list using one of the standard algorithms above would be a disaster.
A typical edit distance algorithm has a time complexity of O(N^2), where N is the number of characters in each string. Scanning a list of size M then costs O(M * N^2). That means very high hardware requirements, and it just doesn't work in your favor regardless of how much hardware you stack up.
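For concreteness, here is a straightforward dynamic-programming edit distance in Python; running it against every entry of a large list is exactly the O(M * N^2) scan described above. The sample names and query are made up for illustration:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming, O(len(a) * len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Brute-force scan of a name list: fine for thousands of entries, hopeless for millions.
names = ["John Smith", "Jon Smyth", "Jane Smithers"]
query = "Jhon Smith"
print(sorted(names, key=lambda n: edit_distance(query.lower(), n.lower()))[:3])
```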
This is where we have to start thinking about other approaches.
One of the common approaches to solving such a problem in a production environment is to use a standard search engine like Apache Lucene.
https://lucene.apache.org/
The Lucene indexing engine indexes the reference data (called "documents"), and an input query can be fired against the engine. The results come back ranked by how close they are to the input.
This is close to how the Google search engine works: Google crawls and indexes the whole web, and you build a miniature system mimicking what Google does.
This works for most cases, including complicated name matching where the first, middle and last names are interchanged.
You can select your results based on the scores emitted by Lucene.
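Lucene itself is a Java library; purely to illustrate the same index-then-rank idea in Python, here is a sketch using the Whoosh library (my substitution, not something the answer prescribes), with made-up sample names:

```python
import os
from whoosh.fields import Schema, TEXT, ID
from whoosh.index import create_in
from whoosh.qparser import QueryParser

# Build a small index of reference names ("documents" in Lucene terms).
schema = Schema(doc_id=ID(stored=True), name=TEXT(stored=True))
os.makedirs("name_index", exist_ok=True)
ix = create_in("name_index", schema)
writer = ix.writer()
for i, name in enumerate(["John Michael Smith", "Michael J. Smith", "Jane Smithson"]):
    writer.add_document(doc_id=str(i), name=name)
writer.commit()

# Query it; hits come back ranked by relevance score, even with the words reordered.
with ix.searcher() as searcher:
    query = QueryParser("name", ix.schema).parse("smith michael")
    for hit in searcher.search(query, limit=5):
        print(hit["name"], hit.score)
```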
As your needs mature, you may start thinking about hosted solutions like Amazon CloudSearch, which wraps Solr and Elasticsearch for you. Of course it uses Lucene underneath, and it keeps you insulated from how large the index grows as the reference data grows.
http://aws.amazon.com/cloudsearch/
You might want to read about Levenshtein distance.
http://en.wikipedia.org/wiki/Levenshtein_distance

Difference between a search engine's relevance rankings and a recommender system [closed]

What's the difference between a search engine's relevance rankings and a recommender system?
Don't both try and achieve the same purpose, i.e. finding the most relevant items for the user?
There is a major difference between a search engine and a recommender system:
In a search engine, the user knows what he is looking for, and he makes the query! For instance, I might wonder whether I should go to see a movie, and search for information about it, like the actors and the director.
In a recommender system, the user isn't supposed to know what we are recommending to her. We match her tastes with those of neighbours, or whatever algorithm you like, and find things she wouldn't have looked for herself, like a new movie!
One is more about information retrieval, while the other is more about information filtering and discovery.
No - these are two different levels of analysis.
A search engine looks through a collection of data to get data that matches a query, even if all the results are identical or the results don't change from day to day. It is very much like a special form of database.
A recommender system uses information about you to provide improved, personalised content around the searched data. It is very much like a servant who knows you well and uses the search engine for you.
Beware: some tools that started as web search engines are now more like recommender systems.

User stories for functionality that cross-cuts multiple presentation modes? [closed]

What's a good way to capture user stories when you have features that are common across multiple UI modes?
For example, imagine a commercial flight information system, something someone might use to answer the question "When is flight UA211 expected to land?"
As is often the case, the feature of providing schedule information is common underlying functionality, even though you might ask for it via a desktop web browser, a mobile browser (where you want to apply different style to make it more usable), and maybe even via SMS shortcodes.
Now, that certainly could be a single user story ("As someone meeting a traveller, I want to see flight arrival information so that I can be at the airport on time"). But that seems wrong (and would probably be an epic story, anyway).
You can make it separate user stories ("As a desktop user...", "As a smartphone user...", etc), which I've done in the past, where the team just knows to estimate the first one to include all of the functionality and the subsequent ones to cover only the UI implementation.
A third option is to make the underlying functionality a story isolated from the presentation layer, and then have UI stories: "As a flight info system front end, I want to get flight status information so that I can present it to the user", "As a desktop user, I want to see flight arrival... etc". But that seems artificial.
Thoughts?
dwh
I think the problem is that you are trying to tie the UI functionality to the backend too tightly.
For example, if you break it into a simple story:
A user may want to know the flight status given the flight number.
OK, given that you implement that, you can now look at which platforms will be calling it. Part of agile is not to over-develop, but in this case, if you have a business need to support mobile and desktop devices, then you should look at implementing this as a REST service, since that is the simplest solution for both to work with.
So the REST service solves the first story above.
Now, you will find that there are other specifics for each platform. For example, is there something on the phone that may already have the information - did the traveller go to a trip site and already enter their info? Then you may want to pull it from there, assuming the traveller is in the user's contacts.
Or, if the user is just going to enter a flight number and that is it, then why not just do it as a webpage, as that is the simplest approach that supports both concepts. If you have a URL that supports GET and outputs HTML, then you can easily display it anywhere.
So my first story was too simple: you may want to consider whether it is possible to return different types of data - HTML, PDF, JSON or XML - but for each of these there should be a business need.
Unfortunately it is hard to answer your question as there are too many unknowns, which is why you are having a difficult time. If you ask the wrong question then you do have an epic, but if you can just break it down to a few simple stories then it becomes much easier to solve.
I would recommend the second option.
As you suggested, the first sounds like too much for a single story, and a story should always fit into a single iteration.
With the third option, the big problem is that you aren't delivering business value at the end of the story, which is generally a bad practice.
There are other ways you could split this work though. You could initially develop a very cut-down, barebones version which would work across all clients, and then refine each of them in subsequent stories.

Building a code asset library [closed]

I have been thinking about setting up some sort of library for all our internally developed software at my organisation. I would like to collect any ideas the good SO folk may have on this topic.
I figure, what is the point of instilling in developers the benefits of writing reusable code, if on the next project the first thing they do is File -> New because they don't know what code is already out there to be reused?
As an added benefit, I think that just having a library like this would encourage developers to think more in terms of reusability when writing code.
I would like to keep this library as simple as possible, perhaps my only two requirements being:
Search facility
Usable for many types of components: assemblies, web services, etc
I see the basic information required on each asset/component to be:
Name & version
Description / purpose
Dependencies
Would you record any more information?
What would be the best platform for this i.e., wiki, forum, etc?
What would make a software library like this successful vs unsuccessful?
All ideas are greatly appreciated.
Thanks
Edit:
Found these similar questions after posting:
How do you ensure code is reused correctly?
How do you foster the use of shared components in your organization?
Sounds like there is no central repository of code available at your organization. Depending on what you do, this could be because of compartmentalization of knowledge due to security restrictions, the fact that external vendor code is included in some or all of the solutions, or because your company has not yet seen the benefits of getting people to reuse, refactor, and evangelize such a repository.
The common attributes of solutions I have seen work at multiple corporations form a multi-pronged approach:
Buy-in at some level from management. Usually it's a CTO/CIO the idea resonates with; they declare it a good thing, don't give any money to fund it, but won't stand in your way - as long as they are aware that someone is going to champion the idea before that person starts soliciting code and consolidating it somewhere.
Some list of projects and the available collateral, in English. I've seen this on wikis, on SharePoint lists, and in text files within a source repository. All of them share the common attribute of some sort of front-end search that allows full-text search over the description of a solution.
Some common share or repository for the binaries and/or code. Oftentimes a large org has different authentication/authorization methods for many different environments, and it might not be practical (or logistically possible) to share a single source repository - don't get hung up on that aspect - just try to get to the point where there is a well-known share/directory/repository that works for your org.
Always make sure there is someone listed as a contact - no one ever takes code and runs it in production without at least talking to its previous owner - and if you don't have a person they can start asking questions of right away, they might just go ahead and hit File -> New.
Unsuccessful attributes I've seen?
A quota of N submissions per engineer per time period - lots of crap starts making its way in.
No method of rating or feedback. If there is no way to favorite/rate/flag entries so the cream rises to the top, you don't go back to search often, because you can't benefit from everyone else having already slogged through the code that wasn't very good.
Lack of a feedback/email link that sends questions directly to the author.
Lack of the ability to categorize organically. Every time there is some super-rigid hierarchy or predetermined category list, everything ends up in "other". If you use tags or similar you can avoid this.
Requiring a rigid-format design document before code is accepted - no one can ever agree on the "centralized" format of a design doc, and no one ever submits when this is required.
Just my thinking.

How does one find prices from Amazon's site programmatically? [closed]

So Amazon has lots of different APIs for different things, and it's hard to find the one I'm looking for.
I have a client that sells things and checks Amazon's lowest price to know where to price their things (slightly under the lowest thing there). They want functionality integrated into their inventory system that would automatically find the product's lowest price on Amazon and display that. I was wondering which AWS service is best suited to this task.
I see the Product Advertising API, and that looks like the closest thing right now. Is that so?
I don't really want to rely on a scraper when Amazon provides a programmatic interface to this information somewhere, which I know they do because many other products have this. Some say that they can just download a dump of Amazon's products and use that locally -- I'm open to that option too if anyone can point me in its direction.
Yes, the technically appropriate API is the Product Advertising API, using the ItemLookup/ItemSearch operations or the Seller* operations.
https://affiliate-program.amazon.com/gp/advertising/api/detail/main.html
I would also advise you to check the licensing agreement for this API, notably clause 4 (i).
You can use the Amazon Marketplace Web Service (api, description)
This service can group all of the available offers into "buckets" and show the lowest price from each bucket.
Each bucket has a unique combination of:
Sub-Condition (New, Like New, Very Good, Good, Acceptable)
FulfillmentChannel (FBA or Merchant-Fulfilled)
ShipsDomestically (True, False, Unknown)
ShippingTime (0-2 days, 3-7 days, 8-13 days, 14 or more days)
SellerPositiveFeedbackRating (98-100%, 95-97%, 90-94%, 80-89%, 70-79%, Less than 70%, Just launched)
Someone made a really cool demo of the API here
You cannot get the entire Amazon product catalog using the API. They have placed restrictions on API usage so that it is relevant mainly to the advertising use case.
I wrote this small Python module to achieve such a task: https://github.com/iMilnb/awstools/blob/master/mods/awsprice.py
Basically, it fetches the prices from Amazon's website and converts them to a nice, parsable Python dict.
I wrote two example functions that show how to use the resulting dict to dump an instance's price for various terms, along with a CSV converter.
There is a reply to a similar question which lists all the .js files containing the prices; they are essentially JSON files (with only a callback(...); statement to remove).
Here is an example for Linux On-Demand prices: http://aws-assets-pricing-prod.s3.amazonaws.com/pricing/ec2/linux-od.js
(Get the full list directly on that reply)
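For reference, a hedged sketch of that stripping step: fetch one of those .js files, cut away the callback(...) wrapper, and parse the remainder as JSON. The URL is the one from the linked example, and this assumes the payload really is plain JSON once the wrapper is stripped, as described above; the file layout may change over time.

```python
import json
import requests

URL = "http://aws-assets-pricing-prod.s3.amazonaws.com/pricing/ec2/linux-od.js"

raw = requests.get(URL).text
# The file is JSONP: a callback name, "(", the JSON object, then ");".
# Keep everything from the first "{" to the last "}".
payload = raw[raw.index("{"): raw.rindex("}") + 1]
prices = json.loads(payload)

# Inspect the top-level structure before relying on any particular field layout.
print(sorted(prices.keys()))
```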
