Python - export the final random forests tree for Graphviz - python-3.x

I have Python code with a decision tree and random forests. The decision tree finds the biggest contributor using:
contr = decisiontree.feature_importances_.max() * 100
contr_full = decisiontree.feature_importances_ * 100
#Showing name
location = pd.to_numeric(np.where(contr_full == contr)[0][0])
result = list(df_dmy)[location + 1]
This returns the biggest contributor in my dataset and is then exported to a Graphviz format using:
tree.export_graphviz(rpart, out_file=path_file + '\\Decision Tree Code for Graphviz.dot', filled=True,
                     feature_names=list(df_dmy.drop(['Reason of Removal'], axis=1).columns),
                     impurity=False, label=None, proportion=True,
                     class_names=['Unscheduled', 'Scheduled'], rounded=True)
In the case of random forests, I have managed to export every tree that is used there (100 trees):
i = 0
for tree_data in rf.estimators_:
    with open('tree_' + str(i) + '.dot', 'w') as my_file:
        my_file = tree.export_graphviz(tree_data, out_file=my_file)
    i = i + 1
This, of course, generates 100 .dot files with the different trees. Not every tree, however, contains the information that is needed, since some trees show a different result. I do know the biggest contributor of the classifier, but I also want to see the decision tree with that result.
What I tried was:
i = 0
for tree_data in rf.estimators_:
    #Feature importance
    df_trees = tree_data.tree_.threshold
    contr = df_trees.max() * 100
    contr_full = df_trees * 100
    #Showing name
    location = pd.to_numeric(np.where(contr_full == contr)[0][0])
    result = print(list(df_dmy)[location + 1])
Using this, I get the error:
IndexError: list index out of range
and I have no idea what is going wrong.
I want a dataframe of the biggest contributors together with their contribution values, in order to filter it down to the actual biggest contributor and its contribution. See example:
Result (in a dataframe) =
Result Contribution
0 Car 0.74
1 Bike 0.71
2 Car 0.79
Python already knows that random forests gave 'Car' as the biggest contributor, so the first filter is to remove everything except 'Car':
Result Contribution
0 Car 0.74
2 Car 0.79
Then it has to search for the highest contribution and retrieve the index.
Result Contribution
2 Car 0.79
Then it has to export the tree information corresponding to that index.
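For illustration, a minimal sketch of that filter-and-select step in pandas, assuming the per-tree results have already been collected into a dataframe like the example above (the column names Result and Contribution are just illustrative):
import pandas as pd

# Hypothetical per-tree summary, mirroring the example above
df = pd.DataFrame({'Result': ['Car', 'Bike', 'Car'],
                   'Contribution': [0.74, 0.71, 0.79]})

best = 'Car'                              # biggest contributor found by the full classifier
subset = df[df['Result'] == best]         # step 1: keep only rows for that contributor
idx = subset['Contribution'].idxmax()     # step 2: index of the highest contribution -> 2

# step 3: idx can then be used to export the matching tree, e.g.
# tree.export_graphviz(rf.estimators_[idx], out_file='best_tree.dot', ...)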
I know it is quite a long story, but I hope someone knows how to finish this code.
Regards, Ganesh

names = []
contributors = []
df = pd.DataFrame(columns=['Parameter', 'Value'])
for tree_data in rf.estimators_:
    #Feature importance
    df_trees = tree_data.tree_.threshold
    contr = tree_data.feature_importances_.max() * 100
    contr_full = tree_data.feature_importances_ * 100
    contr_location = pd.to_numeric(np.where(contr_full == contr)[0][0])
    names.append(list(titanic_dmy.columns)[contr_location + 1])
    contributors.append(contr)
df['Parameter'] = np.array(names)
df['Value'] = np.array(contributors)
idx = df.index[df['Value'] == df['Value'].loc[df['Value'].idxmax()]].tolist()[0]

#Export to Graphviz
tree.export_graphviz(rf.estimators_[idx], out_file=path_file + '\\RF Decision Tree for Graphviz.dot',
                     filled=True, max_depth=graphviz_leafs,
                     feature_names=list(titanic_dmy.drop(['survived'], axis=1).columns),
                     impurity=False, label=None, proportion=True,
                     class_names=['Unscheduled', 'Scheduled'], rounded=True, precision=2)
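As a small follow-up (not required for the solution above): since df['Value'] holds one contribution per estimator, the index of the best tree can also be taken directly with idxmax:
idx = df['Value'].idxmax()   # equivalent, shorter way to pick the tree with the largest contribution
The exported .dot file can then be rendered with the Graphviz command line, for example dot -Tpng "RF Decision Tree for Graphviz.dot" -o best_tree.png.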

Related

Canonical correlation analysis on covariance matrices instead of raw data

Due to privacy issues I don't have the original raw data matrices, but I can have the covariance matrices of x and y (x'x, y'y, x'y) or the correlation matrix between the two of them (or any other sort of matrix that is not the original data matrix).
I need to find a way to apply canonical correlation analysis directly on those matrices. Browsing the net I didn't find any solution to my problem. I want to ask if there is already an implemented algorithm able to work on these data; R would be best, but other languages are OK.
Example from the tutorial in R for cca package: (https://stats.idre.ucla.edu/r/dae/canonical-correlation-analysis/)
mm <- read.csv("https://stats.idre.ucla.edu/stat/data/mmreg.csv")
colnames(mm) <- c("Control", "Concept", "Motivation", "Read", "Write", "Math",
                  "Science", "Sex")
You divide the dataset into x and y:
x <- mm[, 1:3]
y <- mm[, 4:8]
Then the function works taking as input these two datasets: cc(x,y) (note that the function standardizes the data by itself).
What I want to know is whether there is a way to perform CCA starting by centering the matrices around the mean:
x = scale(x, scale = F)
y = scale(y, scale = F)
and then computing the covariance matrices x'x, y'y and x'y:
cvx = crossprod(x); cvy = crossprod(y); cvxy = crossprod(x,y)
The algorithm should then take those matrices as input and compute the canonical variates and correlation coefficients,
like: f(cvx, cvy, cvxy)
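For concreteness, a minimal sketch of what such an f(cvx, cvy, cvxy) could look like, using the textbook covariance-based solution (whiten both blocks, then take the SVD of the whitened cross-covariance). It is written in Python, since that is what the answer below uses; the function name and interface are just illustrative, and an equivalent R version can be built from chol(), solve() and svd():
import numpy as np
from scipy.linalg import fractional_matrix_power, svd

def cca_from_cov(cvx, cvy, cvxy):
    """Canonical correlations and weights from x'x, y'y and x'y only (sketch)."""
    wx = fractional_matrix_power(cvx, -0.5)      # whitening transform for x
    wy = fractional_matrix_power(cvy, -0.5)      # whitening transform for y
    u, rho, vt = svd(wx @ cvxy @ wy,
                     full_matrices=False)        # SVD of the whitened cross-covariance
    a = wx @ u                                   # canonical weights for x
    b = wy @ vt.T                                # canonical weights for y
    return rho, a, b                             # rho holds the canonical correlations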
This article, for example, describes a solution starting from covariance matrices, but I don't know if it is just theory or whether someone has actually implemented it:
http://graphics.stanford.edu/courses/cs233-20-spring/ReferencedPapers/CCA_Weenik.pdf
I hope this is exhaustive enough!
In short: the correlations are used internally in most (probably all) CCA analyses.
In long: you will need to work out how to do that depending on the case. Let me show you an example below.
What is Canonical-correlation analysis (CCA)?
Canonical-correlation analysis (CCA) helps you to identify the best possible linear relations you could create between two datasets. See Wikipedia, and see the references for examples. I will follow this post for the data and the libraries used.
Set up the libraries, load the data, select some variables, remove NaNs, standardise the data.
import pandas as pd
import numpy as np
df = pd.read_csv('2016 School Explorer.csv')
# choose relevant features
df = df[['Rigorous Instruction %',
'Collaborative Teachers %',
'Supportive Environment %',
'Effective School Leadership %',
'Strong Family-Community Ties %',
'Trust %','Average ELA Proficiency',
'Average Math Proficiency']]
df.corr()
# drop missing values
df = df.dropna()
# separate X and Y groups
X = df[['Rigorous Instruction %',
'Collaborative Teachers %',
'Supportive Environment %',
'Effective School Leadership %',
'Strong Family-Community Ties %',
'Trust %'
]]
Y = df[['Average ELA Proficiency',
'Average Math Proficiency']]
for col in X.columns:
    X[col] = X[col].str.strip('%')
    X[col] = X[col].astype('int')
# Standardise the data
from sklearn.preprocessing import StandardScaler
sc = StandardScaler(with_mean=True, with_std=True)
X_sc = sc.fit_transform(X)
Y_sc = sc.fit_transform(Y)
What are Correlations?
I am pausing here to talk about the idea and the implementation.
First of all, CCA analysis is naturally based on that idea; however, for the numerical resolution there are different ways to do it.
The definition is given on Wikipedia.
I am talking about this because I am going to modify a function of that library, and I want you to really pay attention to it.
See Eq. 4 in Bilenko et al. 2016, but you need to be careful about how the terms are placed.
Notice that, strictly speaking, you do not need the correlations.
Let me show you the function that works out that expression, in the pyrcca library:
def kcca(data, reg=0., numCC=None, kernelcca=True, ktype='linear',
         gausigma=1.0, degree=2):
    """Set up and solve the kernel CCA eigenproblem
    """
    if kernelcca:
        kernel = [_make_kernel(d, ktype=ktype, gausigma=gausigma,
                               degree=degree) for d in data]
    else:
        kernel = [d.T for d in data]

    nDs = len(kernel)
    nFs = [k.shape[0] for k in kernel]
    numCC = min([k.shape[1] for k in kernel]) if numCC is None else numCC

    # Get the auto- and cross-covariance matrices
    crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]

    # Allocate left-hand side (LH) and right-hand side (RH):
    LH = np.zeros((sum(nFs), sum(nFs)))
    RH = np.zeros((sum(nFs), sum(nFs)))

    # Fill the left and right sides of the eigenvalue problem
    for i in range(nDs):
        RH[sum(nFs[:i]) : sum(nFs[:i+1]),
           sum(nFs[:i]) : sum(nFs[:i+1])] = (crosscovs[i * (nDs + 1)]
                                             + reg * np.eye(nFs[i]))

        for j in range(nDs):
            if i != j:
                LH[sum(nFs[:j]) : sum(nFs[:j+1]),
                   sum(nFs[:i]) : sum(nFs[:i+1])] = crosscovs[nDs * j + i]

    LH = (LH + LH.T) / 2.
    RH = (RH + RH.T) / 2.

    maxCC = LH.shape[0]
    r, Vs = eigh(LH, RH, eigvals=(maxCC - numCC, maxCC - 1))
    r[np.isnan(r)] = 0
    rindex = np.argsort(r)[::-1]
    comp = []
    Vs = Vs[:, rindex]
    for i in range(nDs):
        comp.append(Vs[sum(nFs[:i]):sum(nFs[:i + 1]), :numCC])
    return comp
The output from here is the canonical covariates (comp); those are a and b in Eq. 4 of Bilenko et al. 2016.
I just want you to pay attention to this:
# Get the auto- and cross-covariance matrices
crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]
That is exactly the place where that operation happens. Notice that it is not exactly the definition from Wikipedia; however, it is mathematically equivalent.
Calculation of the correlations
I am going to calculate the correlations as on Wikipedia, but later I will modify that function, so a couple of details will change, to make sure this answers the original question clearly.
# Get the auto- and cross-covariance matrices
crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]
print(crosscovs)
[array([[1217. , 746.04496925, 736.14178336, 575.21073838,
517.52474332, 641.25363806],
[ 746.04496925, 1217. , 732.6297358 , 1094.38480773,
572.95747557, 1073.96490387],
[ 736.14178336, 732.6297358 , 1217. , 559.5753228 ,
682.15312862, 774.36607617],
[ 575.21073838, 1094.38480773, 559.5753228 , 1217. ,
495.79248754, 1047.31981248],
[ 517.52474332, 572.95747557, 682.15312862, 495.79248754,
1217. , 632.75610906],
[ 641.25363806, 1073.96490387, 774.36607617, 1047.31981248,
632.75610906, 1217. ]]), array([[367.74099904, 391.82683717],
[348.78464015, 355.81358426],
[440.88117453, 514.22183796],
[326.32173163, 311.97282341],
[216.32441793, 269.72859023],
[288.27601974, 304.20209135]]), array([[367.74099904, 348.78464015, 440.88117453, 326.32173163,
216.32441793, 288.27601974],
[391.82683717, 355.81358426, 514.22183796, 311.97282341,
269.72859023, 304.20209135]]), array([[1217. , 1139.05867099],
[1139.05867099, 1217. ]])]
Have a look at the output; I am going to change it a bit so it is between -1 and 1. Again, this modification is minor. Following the definition from Wikipedia, the authors just care about the numerator, and I am now going to include the denominator as well.
max_unit = 0
for crosscov in crosscovs:
    max_unit = np.max([max_unit, np.max(crosscov)])
# Normalise
crosscovs_new = []
for crosscov in crosscovs:
    crosscovs_new.append(crosscov / max_unit)
print(crosscovs_new)
[array([[1. , 0.6130197 , 0.60488232, 0.47264646, 0.4252463 ,
0.52691342],
[0.6130197 , 1. , 0.6019965 , 0.89924799, 0.47079497,
0.88246911],
[0.60488232, 0.6019965 , 1. , 0.45979895, 0.56052024,
0.63629094],
[0.47264646, 0.89924799, 0.45979895, 1. , 0.40738906,
0.86057503],
[0.4252463 , 0.47079497, 0.56052024, 0.40738906, 1. ,
0.51993107],
[0.52691342, 0.88246911, 0.63629094, 0.86057503, 0.51993107,
1. ]]), array([[0.30217009, 0.32196125],
[0.28659379, 0.29236942],
[0.36226884, 0.42253232],
[0.26813618, 0.25634579],
[0.17775219, 0.22163401],
[0.2368743 , 0.24996063]]), array([[0.30217009, 0.28659379, 0.36226884, 0.26813618, 0.17775219,
0.2368743 ],
[0.32196125, 0.29236942, 0.42253232, 0.25634579, 0.22163401,
0.24996063]]), array([[1. , 0.93595618],
[0.93595618, 1. ]])]
For clarity I will show this in a slightly different way, so you can see that the numbers are indeed correlations of the original data.
df.corr()
                          Average ELA Proficiency  Average Math Proficiency
Average ELA Proficiency                  1.000000                  0.935956
Average Math Proficiency                 0.935956                  1.000000
That is also a way to see the variable names. I just want to show you that the numbers above make sense and are what you are calling correlations.
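As a quick cross-check (assuming X_sc and Y_sc from above): because StandardScaler standardises with the population standard deviation, dividing the cross-products by the number of rows gives exactly the Pearson correlations, so np.corrcoef reproduces the same numbers:
print(np.corrcoef(X_sc, rowvar=False))        # compare with crosscovs_new[0]
print(np.corrcoef(X_sc.T, Y_sc.T)[:6, 6:])    # compare with crosscovs_new[1]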
Calculations of the CCA
So now I will just modify the kcca function from pyrcca a bit. The idea is for that function to accept the previously calculated correlation matrices.
from rcca import _make_kernel
from scipy.linalg import eigh

def kcca_working(data, reg=0., numCC=None, kernelcca=False, ktype='linear',
                 gausigma=1.0, degree=2, crosscovs=None):
    """Set up and solve the kernel CCA eigenproblem
    """
    if kernelcca:
        kernel = [_make_kernel(d, ktype=ktype, gausigma=gausigma,
                               degree=degree) for d in data]
    else:
        kernel = [d.T for d in data]

    nDs = len(kernel)
    nFs = [k.shape[0] for k in kernel]
    numCC = min([k.shape[1] for k in kernel]) if numCC is None else numCC

    if crosscovs is None:
        # Get the auto- and cross-covariance matrices
        crosscovs = [np.dot(ki, kj.T) for ki in kernel for kj in kernel]

    # Allocate left-hand side (LH) and right-hand side (RH):
    LH = np.zeros((sum(nFs), sum(nFs)))
    RH = np.zeros((sum(nFs), sum(nFs)))

    # Fill the left and right sides of the eigenvalue problem
    for i in range(nDs):
        RH[sum(nFs[:i]) : sum(nFs[:i+1]),
           sum(nFs[:i]) : sum(nFs[:i+1])] = (crosscovs[i * (nDs + 1)]
                                             + reg * np.eye(nFs[i]))

        for j in range(nDs):
            if i != j:
                LH[sum(nFs[:j]) : sum(nFs[:j+1]),
                   sum(nFs[:i]) : sum(nFs[:i+1])] = crosscovs[nDs * j + i]

    LH = (LH + LH.T) / 2.
    RH = (RH + RH.T) / 2.

    maxCC = LH.shape[0]
    r, Vs = eigh(LH, RH, eigvals=(maxCC - numCC, maxCC - 1))
    r[np.isnan(r)] = 0
    rindex = np.argsort(r)[::-1]
    comp = []
    Vs = Vs[:, rindex]
    for i in range(nDs):
        comp.append(Vs[sum(nFs[:i]):sum(nFs[:i + 1]), :numCC])
    return comp, crosscovs
Let's run the function:
comp, crosscovs = kcca_working([X_sc, Y_sc], reg=0., numCC=2, kernelcca=False,
                               ktype='linear', gausigma=1.0, degree=2,
                               crosscovs=crosscovs_new)
print(comp)
[array([[-0.00375779, 0.0078263 ],
[ 0.00061439, -0.00357358],
[-0.02054012, -0.0083491 ],
[-0.01252477, 0.02976148],
[ 0.00046503, -0.00905069],
[ 0.01415084, -0.01264106]]), array([[ 0.00632283, 0.05721601],
[-0.02606459, -0.05132531]])]
So I take the original function and make it possible to pass in the correlations; I also return them, just for checking.
I print the canonical covariates (comp); those are a and b in Eq. 4 of Bilenko et al. 2016.
Comparing results
Now I am going to compare the results from the original and the modified function, and show that they are equivalent.
I can obtain the original results this way, with crosscovs=None, so it is calculated as originally instead of being passed in:
comp, crosscovs = kcca_working([X_sc, Y_sc], reg=0., numCC=2, kernelcca=False,
                               ktype='linear', gausigma=1.0, degree=2,
                               crosscovs=None)
print(comp)
[array([[-0.13109264, 0.27302457],
[ 0.02143325, -0.12466608],
[-0.71655285, -0.2912628 ],
[-0.43693303, 1.03824477],
[ 0.01622265, -0.31573818],
[ 0.49365965, -0.44098996]]), array([[ 0.2205752 , 1.99601077],
[-0.90927705, -1.79051045]])]
I print the canonical covariates (comp); those are a' and b' in Eq. 4 of Bilenko et al. 2016.
a, b and a', b' are different, but only in scale, so for all purposes they are equivalent. This is because of the definitions of the correlations.
To show that, let me pick numbers from each case and calculate the ratio:
print(0.00061439/-0.00375779)
-0.16349769412340764
print(0.02143325/-0.13109264)
-0.16349697435340382
They are the same result.
Once that is modified, you can just build on top of it.
References:
A post with an example and explanations in Python, using the pyrcca library: https://towardsdatascience.com/understanding-how-schools-work-with-canonical-correlation-analysis-4c9a88c6b913
Bilenko, Natalia Y., and Jack L. Gallant. "Pyrcca: regularized kernel canonical correlation analysis in Python and its applications to neuroimaging." Frontiers in Neuroinformatics 10 (2016): 49. https://www.frontiersin.org/articles/10.3389/fninf.2016.00049/full

How to define log-count ratio for multiclass text dataset (fastai)?

I am trying to follow Rachel Thomas' path of sentiment classification with Naive Bayes. In the video she uses a binary dataset (positive and negative movie reviews). When it comes to applying Naive Bayes, this is what she does:
Definition: the log-count ratio r for each word f is
r = log( (ratio of feature f in positive documents) / (ratio of feature f in negative documents) )
where ratio of feature $f$ in positive documents is the number of times a positive document has a feature divided by the number of positive documents.
p1 = np.squeeze(np.asarray(x[y.items==positive].sum(0)))
p0 = np.squeeze(np.asarray(x[y.items==negative].sum(0)))
pr1 = (p1+1) / ((y.items==positive).sum() + 1)
pr0 = (p0+1) / ((y.items==negative).sum() + 1)
r = np.log(pr1/pr0)
--> it is very simple to apply the log-count-ratio to a dataset with 2 labels!
Problem:
My dataset is not binary! Let's assume I have 5 labels: label_1, ..., label_5.
How do I get the log-count ratio r for a multiclass dataset?
My approach:
p4 = np.squeeze(np.asarray(x[y.items==label_5].sum(0)))
p3 = np.squeeze(np.asarray(x[y.items==label_4].sum(0)))
p2 = np.squeeze(np.asarray(x[y.items==label_3].sum(0)))
p1 = np.squeeze(np.asarray(x[y.items==label_2].sum(0)))
p0 = np.squeeze(np.asarray(x[y.items==label_1].sum(0)))
log-count-ratio:
pr1 = (p1+1) / ((y.items==label_2).sum() + 1)
pr1_not = (p1+1) / ((y.items!=label_2).sum() + 1)
r_1 = np.log(pr1/pr1_not)
log-count-ratio:
pr2 = (p2+1) / ((y.items==label_3).sum() + 1)
pr2_not = (p2+1) / ((y.items!=label_3).sum() + 1)
r_2 = np.log(pr2/pr2_not)
...
Is this correct? Does it mean I get multiple ratios?
Yes, this is correct. The "negative class" is basically all the classes except the one you are considering. So yes, you will get multiple ratios (as many as the number of classes you have).
From https://marvinlsj.github.io/2018/11/23/NBSVM%20for%20sentiment%20and%20topic%20classification/ , the log-count ratio is derived from the posterior probability ratio, which is good for comparing 2 classes to get insight into which is the more probable. I guess you're trying to do the one-vs-one method for a multi-class problem. This will end up with 5x4/2 = 10 pairs of ratios for classification. If you'd like to do classification only, we normally compute the posterior probability for each class and select the best. So in your case, you just select the best from sum(log(p1)), sum(log(p2)), ..., sum(log(p5)).
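For completeness, a compact sketch of the one-vs-rest variant described in the first answer, with x and y.items as in the question (the function name is illustrative). Note that the 'rest' side counts the features of the documents outside the class, rather than reusing the in-class counts:
import numpy as np

def log_count_ratios(x, labels, classes):
    """One-vs-rest log-count ratio for every class (sketch)."""
    ratios = {}
    for c in classes:
        in_c = (labels == c)
        p = np.squeeze(np.asarray(x[in_c].sum(0)))     # feature counts inside class c
        q = np.squeeze(np.asarray(x[~in_c].sum(0)))    # feature counts in all other classes
        pr = (p + 1) / (in_c.sum() + 1)                # smoothed ratios, as in the binary case
        qr = (q + 1) / ((~in_c).sum() + 1)
        ratios[c] = np.log(pr / qr)
    return ratios                                      # one ratio vector per class
Called as log_count_ratios(x, y.items, [label_1, label_2, label_3, label_4, label_5]), this returns five ratio vectors, one per class, matching the first answer.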

How I can plot multiple roc together?

I want to find some good predictors (genes). This is my data, log-transformed RNA-seq:
           TRG    CDK6   EGFR  KIF2C  CDC20
Sample 1   TRG12  11.39  10.62   9.75  10.34
Sample 2   TRG12  10.16   8.63   8.68   9.08
Sample 3   TRG12   9.29  10.24   9.89  10.11
Sample 4   TRG45  11.53   9.22   9.35   9.13
Sample 5   TRG45   8.35  10.62  10.25  10.01
Sample 6   TRG45  11.71  10.43   8.87   9.44
I have calculated the confusion matrix for different models as below.
1- I tested each of 23 genes individually in this code, and each gene that gave a p-value < 0.05 remained as a good predictor; for example, for CDK6 I did:
glm=glm(TRG ~ CDK6, data = df, family = binomial(link = 'logit'))
Finally I obtained five genes and I put them in this model:
final <- glm(TRG ~ CDK6 + CXCL8 + IL6 + ISG15 + PTGS2 , data = df, family = binomial(link = 'logit'))
I want a plot like this with the ROC curve of each model, but I don't know how to do that.
Any help please?
I will give you an answer using the pROC package. Disclaimer: I am the author and maintainer of the package. There are alternative ways to do it.
The plot you are seeing was probably generated by the ggroc function of pROC. In order to generate such a plot from glm models, you need to 1) use the predict function to generate the predictions, 2) generate the ROC curves and store them in a list, preferably named to get a legend automatically, and 3) call ggroc.
glm.cdk6 <- glm(TRG ~ CDK6, data = df, family = binomial(link = 'logit'))
final <- glm(TRG ~ CDK6 + CXCL8 + IL6 + ISG15 + PTGS2 , data = df, family = binomial(link = 'logit'))
rocs <- list()
library(pROC)
rocs[["CDK6"]] <- roc(df$TRG, predict(glm.cdk6))
rocs[["final"]] <- roc(df$TRG, predict(final))
ggroc(rocs)

updating centroids in k-means Python

I'm implementing the k-means algorithm in Python and I got stuck in the part where we are supposed to update the centroids. I have created something that works, but it's really not Pythonic. I know it can be written better and would love some suggestions, for example how to improve the histogram that counts how many points are assigned to each centroid.
Here is my code:
def updateCentroids(centroids, pixelList):
    k = len(centroids)
    centroidsCount = [0] * k          # counts how many pixels were classified to each centroid
    centroidsSum = np.zeros([k, 3])   # sum of the points assigned to each centroid
    for pixel in pixelList:
        index = 0
        # find which centroid the pixel belongs to
        for centroid in centroids:
            if np.array_equal(pixel.classification, centroid):
                centroidsCount[index] += 1
                centroidsSum[index] += pixel.point
                break
            index += 1
    index = 0
    for centroid in centroidsSum:
        centroids[index] = centroid / centroidsCount[index]
        index += 1
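For the 'more Pythonic' part of the question, a vectorised sketch with numpy; it assumes each pixel's assignment is stored as an integer cluster index rather than a copy of the centroid, which is a hypothetical change to the data layout:
import numpy as np

def update_centroids(points, assignments, k):
    """Vectorised centroid update: points is (n, 3), assignments is (n,) of ints in [0, k)."""
    counts = np.bincount(assignments, minlength=k)       # the 'histogram' the question mentions
    sums = np.zeros((k, 3))
    np.add.at(sums, assignments, points)                 # accumulate per-cluster sums
    centroids = np.zeros((k, 3))
    nonempty = counts > 0                                # guard against empty clusters
    centroids[nonempty] = sums[nonempty] / counts[nonempty][:, None]
    return centroids, counts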

How to set LpVariable and Objective Function in pulp for LPP as per the formula?

I want to calculate the maximised value for a particular user based on his Interest, Popularity, or both Interest and Popularity, using the following Linear Programming Problem (LPP) equation, with the pulp package in Python 3.7.
I have 4 lists
INTEREST = [5,10,15,20,25]
POPULARITY = [4,8,12,16,20]
USER = [1,2,3,4,5]
cost = [2,4,6,8,10]
and 2 variable values as
e=0.5 ; e may take (0 or 1 or 0.5)
budget=20
and
i = 0 to n, where n is the length of the lists, meaning the summation should be performed over all list values.
Here, if e==0 the Interest term is 0; if e==1 the Popularity term is 0; if e==0.5 both Interest and Popularity are considered for the max value.
Also, xi takes 0 or 1; if xi==1 the user is considered, and if xi==0 the user is not considered.
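Reading that description, the intended model seems to be the following (my formulation; it matches the objective and constraint used in the answer below):
\[
\max \sum_{i=1}^{n} x_i \left( e \cdot \mathrm{INTEREST}_i + (1 - e) \cdot \mathrm{POPULARITY}_i \right)
\quad \text{s.t.} \quad
\sum_{i=1}^{n} x_i \cdot \mathrm{cost}_i \le \mathrm{budget}, \qquad x_i \in \{0, 1\}
\]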
My pulp code is below:
from pulp import *
INTEREST = [5,10,15,20,25]
POPULARITY = [4,8,12,16,20]
USER = [1,2,3,4,5]
cost = [2,4,6,8,10]
e=0.5
budget=10
#PROBLEM VARIABLE
prob = LpProblem("MaxValue", LpMaximize)
# DECISION VARIABLE
int_vars = LpVariable.dicts("Interest", INTEREST, 0, 4, LpContinuous)
pop_vars = LpVariable.dicts("Popularity", POPULARITY, 0, 4, LpContinuous)
user_vars = LpVariable.dicts("User", USER, 0, 4, LpBinary)
#OBJECTIVE FUNCTION
prob += lpSum(USER(i)((INTEREST[i]*e for i in INTEREST) +
              (POPULARITY[i]*(1-e) for i in POPULARITY)))
# CONSTRAINTS
prob += USER(i)*cost(i) <= budget
#SOLVE
prob.solve()
print("Status : ",LpStatus[prob.status])
# PRINT OPTIMAL SOLUTION
print("The Max Value = ",value(prob.objective))
Now I am getting 2 errors:
1) line 714, in addInPlace: for e in other:
2) line 23, in <module>:
prob += lpSum(INTEREST[i]*e for i in INTEREST) + lpSum(POPULARITY[i]*(1-e) for i in POPULARITY)
IndexError: list index out of range
What did I do wrong in my code? Guide me to resolve this problem. Thanks in advance.
I think I finally understand what you are trying to achieve. I think the problem with your description is to do with terminology. In a linear program we reserve the term variable for those variables which we want to be selected or chosen as part of the optimisation.
If I understand your needs correctly your python variables e and budget would be considered parameters or constants of the linear program.
I believe this does what you want:
from pulp import *
import numpy as np
INTEREST = [5,10,15,20,25]
POPULARITY = [4,8,12,16,20]
COST = [2,4,6,8,10]
N = len(COST)
set_user = range(N)
e=0.5
budget=10
#PROBLEM VARIABLE
prob = LpProblem("MaxValue", LpMaximize)
# DECISION VARIABLE
x = LpVariable.dicts("user_selected", set_user, 0, 1, LpBinary)
# OBJECTIVE FUNCTION
prob += lpSum([x[i]*(INTEREST[i]*e + POPULARITY[i]*(1-e)) for i in set_user])
# CONSTRAINTS
prob += lpSum([x[i]*COST[i] for i in set_user]) <= budget
#SOLVE
prob.solve()
print("Status : ",LpStatus[prob.status])
# PRINT OPTIMAL SOLUTION
print("The Max Value = ",value(prob.objective))
# Show which users selected
x_soln = np.array([x[i].varValue for i in set_user])
print("user_vars: ")
print(x_soln)
This should return the following; i.e. with these particular parameters only the last user is selected for inclusion, but this decision will change: for example, if you increase the budget to 100, all users will be selected.
Status : Optimal
The Max Value = 22.5
user_vars:
[0. 0. 0. 0. 1.]
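A quick check of those numbers against the inputs defined above:
# User 5 alone exhausts the budget (cost 10 <= 10) and contributes
# 0.5*25 + 0.5*20 = 22.5, which matches "The Max Value = 22.5".
print(0.5 * INTEREST[4] + 0.5 * POPULARITY[4])   # 22.5
print(COST[4] <= budget)                         # True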
