Predicting most relevant specialty using word embeddings - nlp

I have a dataframe (df) of about 60k rows that contains the columns doctor_name and specialty (a small sample is given below).
doctor_name  Specialty
A            Dentist
B            Cardiologist
C            Physician
D            Surgeon
E            Ophthalmologist
Given a phrase (for example, "I am having severe chest pain"), the doctor with the matching specialty (here Doctor B, whose specialty is cardiology) should be notified. I came across a few suggestions to use word embeddings and perform cosine similarity. I expect the model to return the top 3 relevant specialties. I would appreciate a reference or sample code to help with this.
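One possible starting point: embed each specialty (or, better, a short description of it) and the incoming phrase with a pretrained sentence-embedding model, then rank specialties by cosine similarity against the phrase. Below is a minimal sketch; the sentence-transformers package and the all-MiniLM-L6-v2 model are assumptions, and any other word or sentence embedding (spaCy vectors, fastText, etc.) would slot in the same way.
import pandas as pd
from sentence_transformers import SentenceTransformer  # assumed package choice
from sklearn.metrics.pairwise import cosine_similarity

df = pd.DataFrame({'doctor_name': ['A', 'B', 'C', 'D', 'E'],
                   'Specialty': ['Dentist', 'Cardiologist', 'Physician',
                                 'Surgeon', 'Ophthalmologist']})

model = SentenceTransformer('all-MiniLM-L6-v2')  # assumed model

# Embed the specialty names (a short description per specialty would likely work better)
specialty_vecs = model.encode(df['Specialty'].tolist())

query = "I am having severe chest pain"
query_vec = model.encode([query])

# Cosine similarity between the phrase and every specialty
df['score'] = cosine_similarity(query_vec, specialty_vecs)[0]

# Top 3 most relevant specialties (and the doctors to notify)
print(df.sort_values('score', ascending=False).head(3))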

Related

How to join two dataframes with multiple overlap in pyspark

Hi, I have a dataset of multiple households where all people within households have been matched between two data sources. The dataframe therefore consists of a 'household' col and two person cols (one for each data source). However, some people (like Jonathan or Peter below) were not able to be matched and so have a blank second person column.
Household  Person_source_A  Person_source_B
1          Oliver           Oliver
1          Jonathan
1          Amy              Amy
2          David            Dave
2          Mary             Mary
3          Lizzie           Elizabeth
3          Peter
As the dataframe is gigantic, my aim is to take a sample of the unmatched individuals and then output a df that has all people within households where a sampled unmatched person exists. I.e. say my random sample includes Oliver but not Peter; then I would keep only household 1 in the output.
My issue is that I've filtered to take the sample and am now stuck making progress. Some combination of join, agg/groupBy... will work, but I'm struggling. I add a flag to the sampled unmatched names to identify them, which I think is helpful...
My code:
from pyspark.sql.functions import col, lit

# filter to unmatched people
df_unmatched = df.filter(col('per_A').isNotNull() & col('per_B').isNull())
# take random sample of 10%
df_unmatched_sample = df_unmatched.sample(0.1)
# add flag of sampled unmatched persons
df_unmatched_sample = df_unmatched_sample.withColumn('sample_flag', lit('1'))
As it pertains to your intent:
I just want to reduce my dataframe to only show the full households of
households where an unmatched person exists that has been selected by
a random sample out of all unmatched people
Using your existing approach, you could use a join on the Household of the sample records:
# filter to unmatched people
df_unmatched = df.filter(col('per_A').isNotNull() & col('per_B').isNull())
# take random sample of 10%
df_unmatched_sample = df_unmatched.sample(0.1).select("Household").distinct()
desired_df = df.join(df_unmatched_sample, ["Household"], "inner")
Edit 1
In response to op's comment:
Is there a slightly different way that keeps a flag to identify the
sampled unmatched person (as there are some households with more than
one unmatched person)?
A left join on your existing dataset after adding the flag column to your sample may help you achieve this, e.g.:
# filter to unmatched people
df_unmatched = df.filter(col('per_A').isNotNull() & col('per_B').isNull())
# take random sample of 10%
df_unmatched_sample = df_unmatched.sample(0.1).withColumn('sample_flag', lit('1'))
desired_df = (
    df.alias("dfo").join(
        df_unmatched_sample.alias("dfu"),
        [
            col("dfo.Household") == col("dfu.Household"),
            col("dfo.per_A") == col("dfu.per_A"),
            col("dfo.per_B").isNull()
        ],
        "left"
    )
)

How to find row and column names of the n-th highest values in a 2D array in Excel?

I want to find the row and column names of the n-th highest values in a 2D array in Excel.
My array has a header row (the Coins) and a header column (the Markets). The data itself shows whether a coin is supported on the market and, if so, what the approximate return on investment (ROI) will be in percent.
Example
An example of the array could look like this:
ROI       Coin A   Coin B   Coin C
Market 1  N/A      7.8%     5.7%
Market 2  0.4%     6.8%     N/A
Market 3  0.45%    7.6%     12.3%
Pay attention: some values are also set to N/A. (Or is there a better way to show that a market doesn't support a specific coin? I don't want to enter 0%, as that makes it harder to spot if a coin is supported by the market. I also don't want to leave the field blank, because then I don't know whether I already checked that market for that coin.)
Preferred output
The output for the example table from above with n=3 should then look like this (from high ROI to low):
Coin  Market  ROI
C     3       12.3%
B     1       7.8%
A     3       0.45%
Requirements
Each coin must only be shown once. So, for example, Coin B must not be listed twice in the Top3 output (once for Market 1: 7.8% and once for Market 3: 7.6%)
What I tried
So I thought about how to split the problem into smaller parts. I think it comes down to these main parts:
find header/row name
here I found something to find the column name for the highest value per row but I wasn't able to adapt it to a working solution for a 2D array
find max in 2D array
here they describe to find the max value in a 2D array but not how to find the n-th highest values
find n-th highest values
here is a good explanation on how to find the highest n values of a 1D array but not how to apply that for a 2D array
only include each coin once
So I really tried to solve this myself, but I'm struggling to put these different parts together.
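Not an Excel formula, but to make the combined logic concrete, here is a small sketch of the same steps in Python/pandas (the data simply mirrors the example table above, and n=3 is assumed): unpivot the 2D array into (Market, Coin, ROI) rows, drop the N/A cells, keep each coin's single best market, then take the top n.
import pandas as pd

# The example table, with N/A represented as missing values
roi = pd.DataFrame({'Coin A': [None, 0.4, 0.45],
                    'Coin B': [7.8, 6.8, 7.6],
                    'Coin C': [5.7, None, 12.3]},
                   index=['Market 1', 'Market 2', 'Market 3'])

n = 3

# Unpivot into long form: one (Market, Coin, ROI) row per cell, dropping N/A
long_form = (roi.rename_axis('Market')
                .reset_index()
                .melt(id_vars='Market', var_name='Coin', value_name='ROI')
                .dropna(subset=['ROI']))

# Highest ROI first, keep each coin only once, then take the top n
top_n = (long_form.sort_values('ROI', ascending=False)
                  .drop_duplicates(subset='Coin')
                  .head(n))

print(top_n[['Coin', 'Market', 'ROI']])
# Coin C / Market 3 / 12.3, Coin B / Market 1 / 7.8, Coin A / Market 3 / 0.45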

Use KDTree/KNN Return Closest Neighbors

I have two python pandas dataframes. One contains all NFL Quarterbacks' College Football statistics since 2007 and a label on the type of player they are (Elite, Average, Below Average). The other dataframe contains all of the college football qbs' data from this season along with a prediction label.
I want to run some sort of analysis to determine the two closest NFL comparisons for every college football qb based on their labels. I'd like to add the two comparable qbs as two new columns to the second dataframe.
The feature names in both dataframes are the same. Here is what the dataframes look like:
Player Year Team GP Comp % YDS TD INT Label
Player A 2020 ASU 12 65.5 3053 25 6 Average
For the example above, I'd like to find the two closest neighbors to Player A that also have the label "Average" from the first dataframe.
The way I thought of doing this was to use Scipy's KDTree and run a query on the tree:
tree = KDTree(nfl[features], leafsize=nfl[features].shape[0]+1)

closest = []
for row in college.iterrows():
    distances, ndx = tree.query(row[features], k=2)
    closest.append(ndx)

print(closest)
However, the print statement returned an empty list. Is this the right way to solve my problem?
.iterrows() returns pairs of (index, Series), where index is the index of the row and the Series holds the feature values, with its own index being the column names (see below).
As you have it, row is being stored as that whole tuple, so row[features] won't really do anything. What you're really after is the Series holding the features and values, i.e. row[1]. So you can either index into the tuple directly, or just unpack it in your loop by doing for idx, row in df.iterrows():. Then you can work with that Series row directly.
Scikit-learn is a good package to use here (it's actually built on SciPy, so you'll notice the same syntax). You'll have to edit the code to your specifications (e.g. filter to only the "Average" players; if you are one-hot encoding the category columns you may need to add those to the features, etc.), but to give you an idea (I made these dataframes up just for the example... actually the NFL one is accurate, but the college one is completely made up), below you can see how to build the KDTree and then take each row in the college dataframe to find the 2 rows it is closest to in the nfl dataframe. I have it print out the names, but as you can see with print(closest), the raw index arrays are there for you as well.
import pandas as pd
from sklearn.neighbors import KDTree

nfl = pd.DataFrame([['Tom Brady','1999','Michigan',11,61.0,2217,16,6,'Average'],
                    ['Aaron Rodgers','2004','California',12,66.1,2566,24,8,'Average'],
                    ['Payton Manning','1997','Tennessee',12,60.2,3819,36,11,'Average'],
                    ['Drew Brees','2000','Perdue',12,60.4,3668,26,12,'Average'],
                    ['Dan Marino','1982','Pitt',12,58.5,2432,17,23,'Average'],
                    ['Joe Montana','1978','Notre Dame',11,54.2,2010,10,9,'Average']],
                   columns=['Player','Year','Team','GP','Comp %','YDS','TD','INT','Label'])

college = pd.DataFrame([['Joe Smith','2019','Illinois',11,55.6,1045,15,7,'Average'],
                        ['Mike Thomas','2019','Wisconsin',11,67,2045,19,11,'Average'],
                        ['Steve Johnson','2019','Nebraska',12,57.3,2345,9,19,'Average']],
                       columns=['Player','Year','Team','GP','Comp %','YDS','TD','INT','Label'])

features = ['GP','Comp %','YDS','TD','INT']

tree = KDTree(nfl[features], leaf_size=nfl[features].shape[0]+1)

closest = []
for idx, row in college.iterrows():
    X = row[features].values.reshape(1, -1)
    distances, ndx = tree.query(X, k=2, return_distance=True)
    closest.append(ndx)

    collegePlayer = college.loc[idx, 'Player']
    closestPlayers = [nfl.loc[x, 'Player'] for x in ndx[0]]
    print('%s closest to: %s' % (collegePlayer, closestPlayers))

print(closest)
Output:
Joe Smith closest to: ['Joe Montana', 'Tom Brady']
Mike Thomas closest to: ['Joe Montana', 'Tom Brady']
Steve Johnson closest to: ['Dan Marino', 'Tom Brady']

How can I use Prophet for per-location forecasting given overall sales data

I am using Prophet for sales forecasting, and I have several CSVs. Most of them represent sales data by date for a specific location (e.g. Location1.CSV has "Jan 1, 2010, X widgets sold", etc.).
There's a master CSV which aggregates sales across all locations. I have used Prophet to forecast Sales across all locations and that works well, but the per-location data is very variable.
I'm seeing much higher mean absolute errors (MAE) for the per-store forecasts, while the overall model has a much lower MAE.
Is there any way I can use the overall Sales model to try to predict per-location sales? Or any alternatives to forecasting per-location Sales besides just using the raw sales data for that location?
Yes, you can use your overall sales model to help predict the per-location sales in Prophet using the add_regressor method.
Let's first create a sample df, where y is the variable we want to predict (per-location sales) and overalls are the overall sales:
import pandas as pd
df = pd.DataFrame(pd.date_range(start="2019-09-01", end="2019-09-30", freq='D', name='ds'))
df["y"] = range(1,31)
df["overalls"] = range(101,131)
df.head()
          ds  y  overalls
0 2019-09-01  1       101
1 2019-09-02  2       102
2 2019-09-03  3       103
3 2019-09-04  4       104
4 2019-09-05  5       105
and split train and test:
df_train = df.loc[df["ds"]<"2019-09-21"]
df_test = df.loc[df["ds"]>="2019-09-21"]
Before training the forecaster, we can add regressors that use additional variables. Here the argument of add_regressor is the column name of the additional variable in the training df.
from fbprophet import Prophet
m = Prophet()
m.add_regressor('overalls')
m.fit(df_train)
The predict method will then use the additional variables to forecast:
forecast = m.predict(df_test.drop(columns="y"))
Note that the additional variables must have values for your future (test) data. As you initially don't have the future overall sales, you could start by predicting overalls with a univariate time series model, and then predict y with add_regressor, using the predicted overalls as the future values of the additional variable.
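A minimal sketch of that two-step idea, reusing the toy df_train/df_test and the fitted model m from above (using a plain univariate Prophet model for overalls is just an illustrative assumption):
from fbprophet import Prophet

# Step 1: univariate forecast of the overall sales for the test horizon
overall_m = Prophet()
overall_m.fit(df_train[["ds", "overalls"]].rename(columns={"overalls": "y"}))
future_overalls = overall_m.predict(df_test[["ds"]])

# Step 2: feed the predicted overalls in as the regressor values when forecasting y
df_future = df_test[["ds"]].copy()
df_future["overalls"] = future_overalls["yhat"].values
forecast = m.predict(df_future)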
See also this notebook, with an example of using weather factors as extra regressors in a forecast of bicycle usage.

Read a text file and store it in the hashmap

I am reading a file called Expenses.txt...I want to store it in a hashmap with repeated entries of items
The text file contains data on several lines, where each line (a record) consists of two fields: category name (a string), and its value (a number). For example, the file below shows expenses by category.
Input
Expenses.txt
cosmetics 100.00
medicines 120.00
cosmetics 50.00
books 250.00
medicines 80.00
medicines 100.00
The program should generate a summary report showing the sums and averages by category, sorted by category. The summary should be displayed on the console. The program should prompt the user for, and read in, the name of the input file.
For example, for the above data, the summary will be:
Output
Category   Total    Average
books      $250.00  $250.00
medicines  $300.00  $100.00
cosmetics  $150.00  $75.00
a) The first field is a string and the second field is a floating point number.
b) The number of records for each category may vary. For example, in the above example, there are 2 records for cosmetics, 3 for medicines and 1 for books.
c) The total number of records (lines) may vary. Do not limit them to any fixed number.
d) The records are not in any sorted order.
It really depends on the language you are using, but I would recommend storing some kind of tuple-like structure as the value in the hashmap. You can read each line, split it in two (the label and the value), and check whether the label is already in the hashmap. If it is, just increment the count of entries by one and add the cost to the running sum.
At the end, just traverse the hashmap and print all the values needed.
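Since the question doesn't name a language, here is a minimal sketch of that idea in Python, with a dict playing the role of the hashmap and a [total, count] pair as the value per category (the input format is assumed to be exactly as in Expenses.txt above):
# Prompt for the input file name, e.g. Expenses.txt
filename = input("Enter the input file name: ")

totals = {}  # category -> [running total, record count]
with open(filename) as f:
    for line in f:
        if not line.strip():
            continue
        category, value = line.split()
        entry = totals.setdefault(category, [0.0, 0])
        entry[0] += float(value)
        entry[1] += 1

# Summary report, sorted by category
print(f"{'Category':<12}{'Total':>10}{'Average':>10}")
for category in sorted(totals):
    total, count = totals[category]
    print(f"{category:<12}{total:>10.2f}{total / count:>10.2f}")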
