I failed several test cases for an automated screening exam because my code ran too long. Is there a way to write this more efficiently?
The prompt was something like:
Write a program that takes a list as input and returns the sum of all combinations of concatenating its elements pairwise.
For example, with the list [20, 5] this would be:
2020 + 205 + 520 + 55 = 2800
I still can't think of a way to do this without casting to string and back into int. The list comprehensions were previously nested for loops which performed worse but I still need more speed.
def concatenationsSum(a):
    # turn into strings
    a = [str(i) for i in a]
    # concat
    cartesian_product = [j + k for j in a for k in a]
    # turn back into integers
    total = [int(i) for i in cartesian_product]
    return sum(total)
I tried some optimization on your code. The main bottleneck here is the casting to str and back to int, so I modified that part:
import math
from itertools import product

def concatenationsSum(a):
    # power of 10 that shifts a number left past the digits of each element
    numDigits = {i: 10 ** (int(math.log10(i)) + 1) for i in a}
    cpro = product(a, a)
    cartesian_product = [i * numDigits[x] + x for i, x in cpro]
    return sum(cartesian_product)
Here you can see I changed a few parts. I added a dictionary that looks up, for each number, the power of ten it needs to be multiplied by; for example, 5 maps to 10, so when you have 20 and 5 you can do 20 * numDigits[5] + 5 = 205. That speeds up the whole process.
Also, there is no need for a double for loop in the list comprehension: Python's itertools provides product(), which returns the Cartesian product.
Testing done: with small lists of about 8 elements I went from 4.6e05 to 3.1e05 on average, and with bigger lists of 5400 elements it went from 11.7 seconds average to 5.3 seconds. That's about double the speed.
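For reference, here is a quick, self-contained sanity check of the optimized approach against the example from the prompt (just an illustrative snippet, not part of the graded submission):
import math
from itertools import product

def concatenationsSum(a):
    # same digit-multiplier trick as above, written as a generator expression
    numDigits = {i: 10 ** (int(math.log10(i)) + 1) for i in a}
    return sum(i * numDigits[x] + x for i, x in product(a, a))

print(concatenationsSum([20, 5]))  # 2020 + 205 + 520 + 55 = 2800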
In the following code I want len(a) to be 1825 while keeping the step at 0.01, but when I print len(a) it gives me 73. To get a length of 1825 I need to repeat the 73 numbers from 2.275 to 3 (step 0.01) enough times to reach 1825 elements. How can I do that? I tried to use np.linspace, but that command doesn't work for this case.
a = np.arange(2.275, 3, 0.01)
It seems like you want to draw 1825 samples from a with np.random.choice:
>>> import numpy as np
>>> a = np.arange(2.275,3,0.01)
>>> c = np.random.choice(a, 1825)
>>> c
array([2.995, 2.545, 2.755, ..., 2.875, 2.275, 2.605])
>>> c.shape
(1825,)
Edit
If you want a repeated 25 times (i.e. 1825/73) in sequence, use np.tile()
target = 1825
n = target/len(a)
np.tile(a, int(n))
yields
array([2.275, 2.285, 2.295, ..., 2.975, 2.985, 2.995])
Here's a one-liner, given a = np.arange(2.275, 3, 0.01) and n = 1825:
a = np.broadcast_to(a, (n // a.size + bool(n % a.size), a.size)).ravel()[:n]
This uses np.broadcast_to to turn a into a matrix in which it repeats itself enough times to fill 1825 elements. ravel then flattens the repeated array, and the final slice chops off any unwanted elements. The ravel call is what actually copies the data, since the broadcast itself uses stride tricks to avoid copying.
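As a quick check of the one-liner (a small verification sketch; the variable names follow the snippet above):
import numpy as np

a = np.arange(2.275, 3, 0.01)   # 73 values
n = 1825
a = np.broadcast_to(a, (n // a.size + bool(n % a.size), a.size)).ravel()[:n]

print(a.size)      # 1825
print(a[:3])       # 2.275 2.285 2.295
print(a[73:76])    # the 73-value pattern starts over here: 2.275 2.285 2.295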
I am using the gam function in the mgcv package to fit spatially adaptive smoothing for heterogeneous data. This is my R code for fitting.
library(mgcv)   # provides gam() and the adaptive smooths
library(MASS)   # provides the mcycle data
data(mcycle)
fit <- gam(accel ~ s(times, k = 20, bs = 'ad'), data = mcycle, method = 'REML')
The output contains 5 smoothing parameters. I am trying to extract the values for each smoothing parameter (S[[i]] for i = 1, ..., 5). I used fit$S[[1]] to get the values for the first one, but it does not work. Could someone help me with this?
You want the $sp component
> fit$sp
s(times)1 s(times)2 s(times)3 s(times)4 s(times)5
1.364206e+01 5.204389e-04 2.036490e-03 8.565542e+00 2.428618e+03
The $S component of the $smooth list contains the penalty matrices associated with the five smoothing parameters.
See ?gamObject and ?smooth.construct for further details on what is returned in the fit.
If you really want the penalty matrices, then look at the structure of the smooth component:
> str(fit$smooth, max = 1)
List of 1
$ :List of 26
..- attr(*, "class")= chr [1:2] "pspline.smooth" "mgcv.smooth"
..- attr(*, "qrc")=List of 4
.. ..- attr(*, "class")= chr "qr"
..- attr(*, "nCons")= int 1
Even if there is only a single smooth, $smooth is a list, so we need fit$smooth[[1]] to access it. If we now look at the $S component of that smooth we see
> str(fit$smooth[[1]]$S, max = 1)
List of 5
$ : num [1:19, 1:19] 0.4446 -0.2845 0.0913 0.0426 0.0943 ...
$ : num [1:19, 1:19] 0.3417 -0.2441 0.0845 0.0341 0.0654 ...
$ : num [1:19, 1:19] 0.0913 -0.0734 0.0271 0.0109 0.0141 ...
$ : num [1:19, 1:19] 4.13e-05 -3.46e-05 4.10e-05 1.32e-04 -3.96e-05 ...
$ : num [1:19, 1:19] 1.68e-06 2.43e-06 3.49e-06 4.62e-06 1.08e-05 ...
This indicates that there are five penalty matrices associated with this smooth and that each matrix is a component of the S list. Hence, for the ith penalty matrix we need
fit$smooth[[1]]$S[[ i ]]
For example, for the second penalty matrix we need
fit$smooth[[1]]$S[[2]]
the first six rows and columns of which look like this
> fit$smooth[[1]]$S[[2]][1:6, 1:6]
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 0.34168394 -0.24407752 0.084500619 0.03412496 0.06538967 0.054028500
[2,] -0.24407752 0.36254851 -0.255915616 0.05368650 -0.03418746 -0.019895116
[3,] 0.08450062 -0.25591562 0.352961000 -0.21961696 0.04421239 0.001082056
[4,] 0.03412496 0.05368650 -0.219616955 0.35168761 -0.18138207 0.077301400
[5,] 0.06538967 -0.03418746 0.044212389 -0.18138207 0.25012833 -0.178018503
[6,] 0.05402850 -0.01989512 0.001082056 0.07730140 -0.17801850 0.264159096
Question 1:
I am trying to work with the plot() function on an AggExResult object, and the examples in the documentation (https://cran.r-project.org/web/packages/apcluster/apcluster.pdf) work as expected.
In my own data, I have an additional column in the input which provides a pre-defined “target” for classification purposes, and I am wondering if there is a way to have the dendrogram labels highlighted by color (e.g. red = class 0, blue = class 1), with the classes of the targets being factors (or characters). I am ultimately trying to visually display how many clusters contain "pure" vs. "mixed" classes. Here is some slightly modified code from the online documentation to show roughly what my input data looks like:
cl1Targ <- matrix(nrow=50,ncol=1)
for(c1t in 1:nrow(cl1Targ)){ cl1Targ[c1t] <- as.factor(0) }
cl2Targ <- matrix(nrow=50,ncol=1)
for(c2t in 1:nrow(cl2Targ)){ cl2Targ[c2t] <- as.factor(1) }
## create two Gaussian clouds
#cl1 <- cbind(rnorm(50,0.2,0.05),rnorm(50,0.8,0.06))
#cl2 <- cbind(rnorm(50,0.7,0.08),rnorm(50,0.3,0.05))
cl1 <- cbind(rnorm(50,0.2,0.05),rnorm(50,0.8,0.06),cl1Targ)
cl2 <- cbind(rnorm(50,0.7,0.08),rnorm(50,0.3,0.05),cl2Targ)
x <- rbind(cl1,cl2)
colnames(x) <- c('Column 1','Column 2','Class_ID')
## compute similarity matrix (negative squared Euclidean)
sim <- negDistMat(x, r=2)
## run affinity propagation
apres <- apcluster(sim, q=0.7)
## compute agglomerative clustering from scratch
aggres1 <- aggExCluster(sim)
## plot dendrogram
plot(aggres1, main='aggres1 w/ target') #
How would I color the dendrogram by the target defined in the input?
Question 2:
When I show() the example data’s APResult, I see the following:
show(apres)
APResult object
Number of samples = 100
Number of iterations = 165
Input preference = -0.01281384
Sum of similarities = -0.1222309
Sum of preferences = -0.1409522
Net similarity = -0.2631832
Number of clusters = 11
Exemplars:
8 17 24 37 43 52 58 68 92 95 99
Clusters:
Cluster 1, exemplar 8:
7 8 9 25 31 36 39 42 47 48
Cluster 2, exemplar 17:
6 11 13 15 17 18 19 23 32 35
Cluster 3, exemplar 24:
2 5 10 24 45
When I use my own data, I see the following (the row names are the drugs being clustered by their gene expression mean fold-change values):
show(apclr2q05_mean)
APResult object
Number of samples = 1045
Number of iterations = 429
Input preference = -390.0822
Sum of similarities = -89326.99
Sum of preferences = -83477.58
Net similarity = -172804.6
Number of clusters = 214
Exemplars:
amantadine_58mg6h_fc amiodarone_147mg3d_fc clarithromycin_56mg1d_fc fluconazole_394mg5d_fc ketoconazole_114mg5d_fc ketoconazole_2274mg1d_fc
pantoprazole_1100mg1d_fc pantoprazole_1100mg3d_fc quetiapine_500mg5d_fc roxithromycin_312mg5d_fc torsemide_3mg3d_fc acetazolamide_250mg3d_fc
Clusters:
Cluster 1, exemplar amantadine_58mg6h_fc:
amantadine_58mg6h_fc promazine_100mg1d_fc cyproteroneAcetate_2500mg6h_fc danazol_2g5d_fc ivermectin_7500ug1d_fc letrozole_250mg6h_fc
mefenamicAcid_93mg3d_fc olanzapine_23mg1d_fc secobarbital_20mg6h_fc zaleplon_100mg3d_fc
Cluster 2, exemplar amiodarone_147mg3d_fc:
amiodarone_147mg3d_fc amiodarone_147mg5d_fc aspirin_375mg5d_fc betaNapthoflavone_80mg5d_fc clofibrate_130mg3d_fc finasteride_800mg5d_fc
Cluster 3, exemplar clarithromycin_56mg1d_fc:
ciprofloxacin_72mg5d_fc ciprofloxacin_450mg6h_fc clarithromycin_56mg1d_fc clarithromycin_56mg3d_fc clarithromycin_56mg5d_fc
Cluster 4, exemplar fluconazole_394mg5d_fc:
fluconazole_394mg5d_fc
This is also what I would expect in terms of content, but I would like to format it for reporting purposes. I have tried to export this using dput(), but I get a lot of extra, unnecessary information in the output file. I am wondering how I might export the same type of information as above, together with the name of the object and the target classifier mentioned earlier, into a table that would look like the following:
Name of object = apclr2q05_mean
Number of samples = 1045
Number of iterations = 429
Input preference = -390.0822
Sum of similarities = -89326.99
Sum of preferences = -83477.58
Net similarity = -172804.6
Number of clusters = 214
Exemplars: Target
amantadine_58mg6h_fc 1
amiodarone_147mg3d_fc 1
clarithromycin_56mg1d_fc 1
fluconazole_394mg5d_fc 0
ketoconazole_114mg5d_fc 0
ketoconazole_2274mg1d_fc 0
Clusters:
Cluster 1, exemplar amantadine_58mg6h_fc:
Drug Target
amantadine_58mg6h_fc 1
promazine_100mg1d_fc 1
cyproteroneAcetate_2500mg6h_fc 1
danazol_2g5d_fc 0
ivermectin_7500ug1d_fc 0
Cluster 2, exemplar amiodarone_147mg3d_fc:
Drug Target
Etc…
A big THANK YOU to Ulrich for his quick response to these questions by email. We wanted to share our discussion with the community, so I will let him post his solution so that he gets the credit he deserves :-)
As an update, I tried to implement the answer to Question 1 and the sample code works as expected, but I am having trouble getting this to work on my data. The input data has two parts. The first is a matrix with the numeric measurement data including column and row labels:
> fci[1:3,1:3]
M30596_PROBE1 AI231309_PROBE1 NM_012489_PROBE1
amantadine_58mg1d_fc 0.05630744 -0.10441722 0.41873201
amantadine_58mg6h_fc -0.42780274 -0.26222322 0.02703001
amantadine_220mg1d_fc 0.35260779 -0.09902214 0.04067055
The second is the "target" values in factor format, each of which corresponds to the same row in fci above:
> targs[1:3]
amantadine_58mg1d_fc amantadine_58mg6h_fc amantadine_220mg1d_fc
0 0 0
Levels: 0 1
From here, the tree was built as below:
# build the AggExResult:
aglomr1 <- aggExCluster(negDistMat(r=2), fci)
# convert the data
tree <- as.dendrogram(aglomr1)
# assign the color codes
colorCodes <- c("0"="red", "1"="green")
names(targs) <- rownames(fci)
xColor <- colorCodes[as.character(targs)]
names(xColor) <- rownames(fci)
# plot the colored tree
labels_colors(tree) <- xColor[order.dendrogram(tree)]
plot(tree, main="Colored Tree")
The tree was generated but the leaves were not colored. Doing some digging:
> head(xColor)
0 0 0 0 0 0
"red" "red" "red" "red" "red" "red"
That part seems to work as expected in terms of the targets having the correct colors assigned, but the row names are not in xColor, and the line labels_colors(tree) <- xColor[order.dendrogram(tree)] does not return matching labels, but rather what appear to be row numbers, or NAs:
> head(order.dendrogram(tree))
[1] "295" "929" "488" "493" "233" "235"
> head(labels_colors(tree))
295 929 488 493 233 235
> head(xColor[order.dendrogram(tree)])
<NA> <NA> <NA> <NA> <NA> <NA>
NA NA NA NA NA NA
How would I get the line labels_colors(tree) <- xColor[order.dendrogram(tree)] to behave in the same way as in the example provided? Specifically, what I am trying to show is leaf labels such as amantadine_58mg1d_fc being highlighted in the color that corresponds to the target (0/1).
Here is my answer to your Question 1: the plot() method for 'AggExResult' objects internally uses the plot.dendrogram() method. Since this method does not allow for coloring the leaves of dendrograms, this will not work. However, the 'dendextend' package offers such functionality. (BTW, I found that solution in another thread: Label and color leaf dendrogram in r.) Since 'apcluster' offers casts to 'hclust' and 'dendrogram' objects, this package's functionality can be used more or less directly.
So, here is some sample code:
library(apcluster)
## create two Gaussian clouds along with class labels 0/1
cl1 <- cbind(rnorm(50, 0.2, 0.05), rnorm(50, 0.8, 0.06))
cl2 <- cbind(rnorm(50, 0.7, 0.08), rnorm(50, 0.3, 0.05))
x <- cbind(Columns=data.frame(rbind(cl1, cl2)),
           "Class_ID"=factor(as.character(c(rep(0, 50), rep(1, 50)))))
## compute similarity matrix (negative squared Euclidean)
sim <- negDistMat(x[, 1:2], r=2)
## compute agglomerative clustering from scratch
aggres1 <- aggExCluster(sim)
## load 'dendextend' package
## install.packages("dendextend") ## if not yet installed
library(dendextend)
## convert object
tree <- as.dendrogram(aggres1)
## assign color codes
colorCodes <- c("0"="red", "1"="green")
xColor <- colorCodes[x$Class_ID]
names(xColor) <- rownames(x)
## plot color-labeled tree
labels_colors(tree) <- xColor[order.dendrogram(tree)]
plot(tree)
Here is my answer to your Question 2: Sorry, no such functionality is implemented in the 'apcluster' package. And since this is quite a special request, I am reluctant to include it in the package (let alone the fact that show() methods cannot have additional arguments). So, alternatively, I want to provide you with a custom function that allows for labeling/grouping exemplars and samples:
library(apcluster)
## create two Gaussian clouds along with class labels 0/1
cl1 <- cbind(rnorm(50, 0.2, 0.05), rnorm(50, 0.8, 0.06))
cl2 <- cbind(rnorm(50, 0.7, 0.08), rnorm(50, 0.3, 0.05))
x <- cbind(Columns=data.frame(rbind(cl1, cl2)),
           "Class_ID"=factor(as.character(c(rep(0, 50), rep(1, 50)))))
## compute similarity matrix (negative squared Euclidean)
sim <- negDistMat(x[, 1:2], r=2)
## special show() function with labeled data
show.ExClust.labeled <- function(object, labels=NULL)
{
    if (!is(object, "ExClust"))
        stop("'object' is not of class 'ExClust'")
    if (is.null(labels))
    {
        show(object)
        return(invisible(NULL))
    }
    cat("\n", class(object), " object\n", sep="")
    if (!is.finite(object@l) || !is.finite(object@it))
        stop("object is not result of an affinity propagation run; ",
             "it is pointless to create 'APResult' objects yourself.")
    cat("\nNumber of samples = ", object@l, "\n")
    if (length(object@sel) > 0)
    {
        cat("Number of sel samples = ", length(object@sel),
            paste(" (", round(100*length(object@sel)/object@l, 1),
                  "%)\n", sep=""))
        cat("Number of sweeps = ", object@sweeps, "\n")
    }
    cat("Number of iterations = ", object@it, "\n")
    cat("Input preference = ", object@p, "\n")
    cat("Sum of similarities = ", object@dpsim, "\n")
    cat("Sum of preferences = ", object@expref, "\n")
    cat("Net similarity = ", object@netsim, "\n")
    cat("Number of clusters = ", length(object@exemplars), "\n\n")
    if (length(object@exemplars) > 0)
    {
        if (length(names(object@exemplars)) == 0)
        {
            cat("Exemplars:\n")
            df <- data.frame("Sample"=object@exemplars,
                             Label=labels[object@exemplars])
            print(df, row.names=FALSE)
            for (i in 1:length(object@exemplars))
            {
                cat("\nCluster ", i, ", exemplar ",
                    object@exemplars[i], ":\n", sep="")
                df <- data.frame(Sample=object@clusters[[i]],
                                 Label=labels[object@clusters[[i]]])
                print(df, row.names=FALSE)
            }
        }
        else
        {
            df <- data.frame("Exemplars"=names(object@exemplars),
                             Label=labels[names(object@exemplars)])
            print(df, row.names=FALSE)
            for (i in 1:length(object@exemplars))
            {
                cat("\nCluster ", i, ", exemplar ",
                    names(object@exemplars)[i], ":\n", sep="")
                df <- data.frame(Sample=names(object@clusters[[i]]),
                                 Label=labels[names(object@clusters[[i]])])
                print(df, row.names=FALSE)
            }
        }
    }
    else
    {
        cat("No clusters identified.\n")
    }
}
## create label vector (with proper names)
label <- x$Class_ID
names(label) <- rownames(x)
## run apcluster()
apres <- apcluster(sim, q=0.3)
## show with labels
show.ExClust.labeled(apres, label)
I was wondering if someone can show me the steps involved in developing a 4x4 transformation matrix that can be used as the viewing transformation.
The camera is at (1, 2, 2)^T
The camera is pointed at the direction (0, 1, 0)^T
The up-vector, which will be mapped to the positive y direction on the image, is the direction (0, 0, 1)^T.
I've looked through my notes and do not understand how to solve these types of problems as I know they are quite common in computer graphics.
You can use the formulas here: just fill in the matrices and multiply them one after the other until you've built up your transformation matrix. (The rotation matrices there may be wrong, so double-check them against the formulas here.)
What type of problems are you trying to solve? You didn't really ask a narrow question.
The camera position would be set with a Translation matrix:
[1 0 0 X]
[0 1 0 Y]
[0 0 1 Z]
[0 0 0 1]
substituting [1,2,2]^T for [X,Y,Z]^T
would give you a Translation matrix:
[1 0 0 1]
[0 1 0 2]
[0 0 1 2]
[0 0 0 1]
This can be multiplied by an input vector
[x y z 1]^T
to transform that point, like so:
[1 0 0 1] [x] = x+1
[0 1 0 2] [y] = y+2
[0 0 1 2] [z] = z+2
[0 0 0 1] [1] = 1
For input vector [4,5,6,1] this would yield [5,7,8,1].
See, it just moves or translates the input x,y,z point by the X,Y,Z we plugged in above (ignoring the last component for now).
Remember that a matrix M multiplied by a vector v gives you a vector, call it p
p = M v
Think of this as calling a function, sort of like p = sin(x), but instead p = M(v), where M is a transformation function. It happens to be in the form of a matrix because the transformations we care about can be represented by linear operators, which is a fancy way of saying a matrix multiplication, which in turn is just a sum of scalar multiplications. To chain these matrix transformations as if they were function calls, just multiply the matrices one after another. (Note that this is a simplification: we need a division to do perspective transformations, which is why we cheat and use a 4x4 matrix instead of just a 3x3; that's what the weird term "homogeneous coordinates" means.)
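To make the chaining idea concrete, here is a tiny NumPy sketch (purely illustrative; the 90-degree z rotation is just an arbitrary example matrix) showing that applying two transforms in sequence is the same as applying their product once, and that the order of that product matters:
import numpy as np

# the translation by (1, 2, 2) from the example above
T = np.array([[1, 0, 0, 1],
              [0, 1, 0, 2],
              [0, 0, 1, 2],
              [0, 0, 0, 1]], dtype=float)

# a 90-degree rotation about the z axis
R = np.array([[0, -1, 0, 0],
              [1,  0, 0, 0],
              [0,  0, 1, 0],
              [0,  0, 0, 1]], dtype=float)

v = np.array([4, 5, 6, 1], dtype=float)

print(T @ v)                      # [5. 7. 8. 1.], matching the worked example
print(T @ (R @ v))                # rotate first, then translate ...
print((T @ R) @ v)                # ... same as multiplying the matrices first
print(np.allclose(T @ R, R @ T))  # False: the order of composition matters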
Does your class have a textbook or lecture notes (if they're online, can you link to them)? I would imagine the materials cover the other transformations and possibly provide examples. You can try it: multiply some vector v = [-9 -8 -7 1] (with a 1 appended as the homogeneous coordinate) by the 4x4 matrix above and see what [x y z w] vector you get out of it. Then try plugging in other values for the rotation matrices.
You may run into tricky bits where you need to multiply the rotation matrix by the translation matrix in the right order: R T is a different matrix than T R whenever the translation is anything other than (0, 0, 0).
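Putting the pieces together for the specific camera in the question, here is a minimal NumPy sketch of one common construction (an OpenGL-style "look-at" view matrix, assuming column vectors and a camera that looks down its own -z axis; other conventions give a different but equivalent matrix):
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

eye     = np.array([1.0, 2.0, 2.0])             # camera position
forward = normalize(np.array([0.0, 1.0, 0.0]))  # viewing direction
up      = np.array([0.0, 0.0, 1.0])             # up-vector, mapped to +y on the image

# build an orthonormal camera basis
right   = normalize(np.cross(forward, up))      # camera x axis
true_up = np.cross(right, forward)              # camera y axis

# rotation part: rows are the camera's right/up/back axes in world coordinates,
# so multiplying by it re-expresses a world-space vector in camera space
R = np.array([right, true_up, -forward])

# full view matrix: the rotation composed with a translation by -eye
# (this is exactly where the ordering point above comes in)
view = np.eye(4)
view[:3, :3] = R
view[:3, 3] = -R @ eye

print(view)
print(view @ np.append(eye, 1.0))   # the camera position maps to the origin: [0. 0. 0. 1.]
With the numbers from the question this puts the camera at the origin, sends the viewing direction down -z, and maps the up-vector to +y, which is what a viewing transformation is supposed to do.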