I am trying to obtain a logical vector by combining two other logical vectors with an elementwise '&':
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]

// [[Rcpp::export]]
arma::uvec test1(arma::vec t1, double R1, double R2){
    arma::uvec t = (t1 >= R1) & (t1 < R2);
    return t;
}
It produces the following error when I try to compile:
error: no match for 'operator&' (operand types are 'arma::enable_if2<true, const arma::mtOp<unsigned int, arma::Col<double>, arma::op_rel_gteq_post> >::result {aka const arma::mtOp<unsigned int, arma::Col<double>, arma::op_rel_gteq_post>}' and 'arma::enable_if2<true, const arma::mtOp<unsigned int, arma::Col<double>, arma::op_rel_lt_post> >::result {aka const arma::mtOp<unsigned int, arma::Col<double>, arma::op_rel_lt_post>}')
arma::uvec t = (t1 >= R1) & (t1 < R2);
^
I have no idea what is going on. I am guessing that Armadillo does things differently, but I can't find any sources to help me clear things up. Any help would be appreciated! Thank you very much!
I have no idea what is going on. I am guessing that Armadillo does things differently, but I can't find any sources to help me clear things up.
The ultimate source here is the Armadillo docs. If you go down to the section on operators, you'll see that the & operator is not one of those listed among the "[o]verloaded operators for Mat, Col, Row and Cube classes." So, if you want such an operator you'll have to code it up yourself (or see if someone else already has it floating around the internet). There is such an operator for Rcpp::NumericVectors:
#include <Rcpp.h>

// [[Rcpp::export]]
Rcpp::LogicalVector test1(const Rcpp::NumericVector& t1, double R1, double R2){
    return (t1 >= R1) & (t1 < R2);
}
test1(1:10, 3, 7)
# [1] FALSE FALSE TRUE TRUE TRUE TRUE FALSE FALSE
# [9] FALSE FALSE
Of course, that doesn't help much if the rest of your code really relies on Armadillo.
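If you do need to stay within Armadillo, one possible workaround (a sketch, not something the Armadillo docs spell out) is to evaluate each comparison into a uvec and combine them with the elementwise multiplication operator %, since multiplying the 0/1 results behaves like a logical AND:

#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]

// [[Rcpp::export]]
arma::uvec test1_arma(arma::vec t1, double R1, double R2){
    arma::uvec lower = (t1 >= R1);   // 0/1 vector
    arma::uvec upper = (t1 < R2);    // 0/1 vector
    return lower % upper;            // elementwise product of 0/1 vectors acts as AND
}

(The function name test1_arma is just for illustration.)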
Update: Just Use &&
As pointed out by mtall in the comments, the && operator actually is available, even though it's not discussed in the Armadillo docs (maybe it's not as ultimate of a source as I thought).
So, just change your code to the following:
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]

// [[Rcpp::export]]
arma::uvec test1(arma::vec t1, double R1, double R2){
    arma::uvec t = (t1 >= R1) && (t1 < R2);
    return t;
}
and it works how I believe you want it to based on your question and response to comments:
test1(1:10, 3, 7)
[,1]
[1,] 0
[2,] 0
[3,] 1
[4,] 1
[5,] 1
[6,] 1
[7,] 0
[8,] 0
[9,] 0
[10,] 0
I am facing a very similar issue to these questions:
convert Rcpp::NumericVector to Eigen::VectorXd
Converting between NumericVector/Matrix and VectorXd/MatrixXd in Rcpp(Eigen) to perform Cholesky solve
I am writing an R package that uses the RcppEigen library for matrix arithmetic. My problem is that the project does not compile because of an error in the conversion of the Rcpp::NumericMatrix input to an Eigen::MatrixXd.
The file looks like this:
#include <map>
#include <Rcpp.h>
...
using namespace Eigen;
...
// [[Rcpp::export]]
Rcpp::List my_function(Rcpp::NumericMatrix input_matrix)
{
...
Map<MatrixXd> GATE_matrixx(Rcpp::as<Map<MatrixXd> >(GATE_matrix));
...
}
This gives me the following error:
Myfile.cpp:40:65: required from here
C:/Users/User/AppData/Local/Programs/R/R-4.2.2/library/Rcpp/include/Rcpp/internal/Exporter.h:31:31: error:
no matching function for call to 'Eigen::Map<Eigen::Matrix<double, -1, -1> >::Map(SEXPREC*&)'
   31 |     Exporter( SEXP x ) : t(x)
      |                          ^
In file included from C:/Users/User/AppData/Local/Programs/R/R-4.2.2/library/RcppEigen/include/Eigen/Core:19,
                 from C:/Users/User/AppData/Local/Programs/R/R-4.2.2/library/RcppEigen/include/Eigen/SparseCore:11,
                 from C:/Users/User/AppData/Local/Programs/R/R-4.2.2/library/RcppEigen/include/Eigen/Sparse:26,
                 from Myfile.cpp:8
I have also tried to change the line to:
MatrixXd input_matrix_eigen(Rcpp::as<MatrixXd>(input_matrix));
This gives me the equivalent error:
Myfile.cpp:40:54: required from here
C:/Users/User/AppData/Local/Programs/R/R-4.2.2/library/RcppEigen/include/Eigen/src/Core/Matrix.h:332:31: error:
no matching function for call to 'Eigen::Matrix<double, -1, -1>::_init1<SEXPREC*>(SEXPREC* const&)'
Do you have any ideas?
If more information is required to evaluate the issue, just let me know.
If you use the command
RcppEigen::RcppEigen.package.skeleton("demoPackage")
an example package demoPackage is created for you, which you can install in the usual way. It contains example functions that create and return a matrix:
// [[Rcpp::export]]
Eigen::MatrixXd rcppeigen_hello_world() {
    Eigen::MatrixXd m1 = Eigen::MatrixXd::Identity(3, 3);
    // Eigen::MatrixXd m2 = Eigen::MatrixXd::Random(3, 3);
    // Do not use Random() here to not promote use of a non-R RNG
    Eigen::MatrixXd m2 = Eigen::MatrixXd::Zero(3, 3);
    for (auto i = 0; i < m2.rows(); i++)
        for (auto j = 0; j < m2.cols(); j++)
            m2(i, j) = R::rnorm(0, 1);
    return m1 + 3 * (m1 + m2);
}
If you set the same seed as I do, you should get the same matrix:
> set.seed(42)
> demoPackage::rcppeigen_hello_world()
[,1] [,2] [,3]
[1,] 8.11288 -1.694095 1.089385
[2,] 1.89859 5.212805 -0.318374
[3,] 4.53457 -0.283977 10.055271
>
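For the conversion the question asks about, the usual pattern in a standalone sourceCpp file is to include RcppEigen.h (rather than Rcpp.h alone) and declare the dependency on RcppEigen. A minimal sketch along those lines (the function name and the arithmetic are just illustrative):

#include <RcppEigen.h>
// [[Rcpp::depends(RcppEigen)]]

// [[Rcpp::export]]
Eigen::MatrixXd scale_matrix(Rcpp::NumericMatrix input_matrix) {
    // Map the R matrix without copying, then use it like any Eigen object
    Eigen::Map<Eigen::MatrixXd> m(Rcpp::as<Eigen::Map<Eigen::MatrixXd> >(input_matrix));
    return 2.0 * m;
}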
The ++ and -- operators are included in many other languages. Why did the language designers choose not to include these operators in Rust?
They are not included in Rust because they can lead to subtle bugs: they require detailed knowledge of evaluation order, especially when combined into larger expressions, as shown below. Can you guess what these two C++ programs print? I guessed wrong.
#include <cstdio>

int main()
{
    int a = 4;
    int b = 4;
    int c = a++ + b++;
    printf("%i %i %i", a, b, c);
    return 0;
}
#include <cstdio>

int main()
{
    int x = 10;
    int z = ++x + x++; // modifies x twice without sequencing: undefined behavior in C++
    printf("%i %i", x, z);
    return 0;
}
From the FAQ:
Why doesn't Rust have increment and decrement operators?
Preincrement and postincrement (and the decrement equivalents), while
convenient, are also fairly complex. They require knowledge of
evaluation order, and often lead to subtle bugs and undefined behavior
in C and C++. x = x + 1 or x += 1 is only slightly longer, but
unambiguous.
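For comparison, the idiomatic Rust spelling is simply a compound assignment (a trivial sketch, not part of the FAQ text):

fn main() {
    let mut x = 10;
    x += 1;             // the Rust replacement for x++ / ++x
    let z = x + 1;      // larger expressions have to be written out explicitly
    println!("{} {}", x, z);
}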
I have a few questions about setting up NLopt with non-linear constraints:
If the number of constraints is bigger than the number of variables, how can we set grad[] in the constraint function? Is there any (automatic) method to solve the problem without introducing Lagrange multipliers?
Using Lagrange multipliers, I know we can solve the problem. However, with Lagrange multipliers we have to obtain my_constraint_data manually, which makes it difficult to solve large-scale problems.
For example, suppose I want to minimize the function
f(x1,x2) = -((x1)^3)-(2*(x2)^2)+(10*(x1))-6-(2*(x2)^3)
subject to the following constraints:
Constraint 1: c1 = 10-(x1)*(x2) >= 0
Constraint 2: c2 = ((x1)*(x2)^2)-5 >= 0
Constraint 3: c3 = (x2)-(x1)*(x2)^3 >= 0
From the NLopt tutorial, we know that grad[0] = d(c1)/d(x1) and grad[1] = d(c2)/d(x2) give the gradient of the constraints. Then, we set grad as follows:
double myconstraint(unsigned n, const double *x, double *grad, void *data) {
    my_constraint_data *d = (my_constraint_data *)data;
    if (grad) {
        grad[0] = -x[1];          // grad[0] = d(c1)/dx[1]
        grad[1] = 2*x[0] + x[1];  // grad[1] = d(c2)/dx[2]
        grad[2] = ???;            // grad[2] = d(c3)/dx[3], but we only have 2 variables (x1) & (x2)
    }
    return (10 - x[0]*x[1], x[0]*x[1]*x[1] - 5, x[1] - x[0]*x[1]*x[1]*x[1]);
}
The problem is that we do not know how to set grad[] (especially for c3) when the number of constraints is larger than the number of variables.
Of course we can solve the problem with a non-automatic method like the one below, using Lagrange multipliers (l1, l2, l3), where
grad[0] = -l1*(d(c1)/d(x1)) - l2*(d(c2)/d(x1)) - l3*(d(c3)/d(x1))
and
grad[1] = -l1*(d(c1)/d(x2)) - l2*(d(c2)/d(x2)) - l3*(d(c3)/d(x2))
double myconstraint(unsigned n, const double *x, double *grad, void *data) {
    my_constraint_data *d = (my_constraint_data *)data;
    // set l1, l2, and l3 as the Lagrange multiplier parameters
    double l1 = d->l1, l2 = d->l2, l3 = d->l3;
    ++count;
    if (grad) {
        grad[0] = l1*x[1] - l2*x[1]*x[1] - l3*x[1]*x[1]*x[1];
        grad[1] = l1*x[0] - 2*l2*x[0]*x[1] - l3 + 3*l3*x[0]*x[1]*x[1];
    }
    return (10 - x[0]*x[1], x[0]*x[1]*x[1] - 5, x[1] - x[0]*x[1]*x[1]*x[1]);
}
Meanwhile, it is not easy to apply the non-automatic method to a large-scale problem because it would be inefficient and complicated to program.
Is there any method to solve nonlinear simultaneous equations using NLopt? (When Lagrange multipliers are used and the number of constraints is larger than the number of variables, nonlinear simultaneous equations have to be solved.)
Any answer would be appreciated and really helpful to us. Thank you for your kindness.
I think you've got the constraints and the variables you are minimizing mixed up. If I understand your question correctly, you need to create three separate constraint functions for your three constraints. For example:
double c1(unsigned n, const double *x, double *grad, void *data)
{
    /* Enforces the constraint
     *
     * 10 - x1*x2 >= 0
     *
     * Note we compute x1*x2 - 10 instead of 10 - x1*x2 since nlopt expects
     * inequality constraints to be of the form h(x) <= 0. */
    if (grad) {
        grad[0] = x[1]; // grad[0] = d(c1)/dx1
        grad[1] = x[0]; // grad[1] = d(c1)/dx2
    }
    return x[0]*x[1] - 10;
}
double c2(unsigned n, const double *x, double *grad, void *data)
{
    /* Enforces the constraint
     *
     * x1*x2^2 - 5 >= 0
     *
     * Note we compute 5 - x1*x2^2 (the negation of x1*x2^2 - 5) since nlopt
     * expects inequality constraints to be of the form h(x) <= 0. */
    if (grad) {
        grad[0] = -x[1]*x[1];
        grad[1] = -2*x[0]*x[1];
    }
    return -x[0]*x[1]*x[1] + 5;
}
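The third constraint is not covered in the original answer, but a sketch following the same pattern would be (rewriting c3 = x2 - x1*x2^3 >= 0 as x1*x2^3 - x2 <= 0):

double c3(unsigned n, const double *x, double *grad, void *data)
{
    /* Enforces the constraint
     *
     * x2 - x1*x2^3 >= 0
     *
     * rewritten as x1*x2^3 - x2 <= 0 for nlopt. */
    if (grad) {
        grad[0] = x[1]*x[1]*x[1];        // d/dx1 (x1*x2^3 - x2) = x2^3
        grad[1] = 3*x[0]*x[1]*x[1] - 1;  // d/dx2 (x1*x2^3 - x2) = 3*x1*x2^2 - 1
    }
    return x[0]*x[1]*x[1]*x[1] - x[1];
}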
Then, in your main function you need to add each inequality constraint separately:
int main(int argc, char **argv)
{
    // set up nlopt here

    /* Add our constraints. */
    nlopt_add_inequality_constraint(opt, c1, NULL, 1e-8);
    nlopt_add_inequality_constraint(opt, c2, NULL, 1e-8);
    // etc.
}
I have a list of NumericVectors and I need a List of unique elements. I tried the Rcpp::unique function. It works very well when applied to a NumericVector but not to a List. This is the code and the error I got:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
List h(List x){
    return Rcpp::unique(x);
}
Error in dyn.load("/tmp/RtmpDdKvcH/sourceCpp-x86_64-pc-linux-gnu-1.0.0/sourcecpp_272635d5289/sourceCpp_10.so") :
unable to load shared object '/tmp/RtmpDdKvcH/sourceCpp-x86_64-pc-linux-gnu-1.0.0/sourcecpp_272635d5289/sourceCpp_10.so':
/tmp/RtmpDdKvcH/sourceCpp-x86_64-pc-linux-gnu-1.0.0/sourcecpp_272635d5289/sourceCpp_10.so: undefined symbol: _ZNK4Rcpp5sugar9IndexHashILi19EE8get_addrEP7SEXPREC
It is unclear what you are doing wrong, and it is an incomplete / irreproducible question.
But there is a unit test that does just what you do, and we can do it by hand too:
R> Rcpp::cppFunction("NumericVector uq(NumericVector x) { return Rcpp::unique(x); }")
R> uq(c(1.1, 2.2, 2.2, 3.3, 27))
[1] 27.0 1.1 3.3 2.2
R>
Even if there isn't a matching Rcpp sugar function, you can call R functions from within C++. Example:
#include <Rcpp.h>
using namespace Rcpp;

Rcpp::Environment base("package:base");
Function do_unique = base["unique"];

// [[Rcpp::export]]
List myfunc(List x) {
    return do_unique(x);
}
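Called from R after compiling with Rcpp::sourceCpp(), a call such as myfunc(list(1, 42, 1, 1:3, 42)) should then return the same result as unique() on that list, since it simply delegates to base::unique.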
Thank you for your interest in this issue.
As I mentioned, my List contains only NumericVectors. I propose the code below, which works very well and is faster than R's unique function, although its efficiency decreases when the list is large. Maybe this can help someone, and someone may be able to optimise it further.
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
List uniqueList(List& x) {
    int xsize = x.size();
    List xunique(x);                  // xunique[0..s-1] holds the unique elements found so far
    int s = 1;
    for (int i(1); i < xsize; ++i) {
        NumericVector xi = x[i];
        int xisize = xi.size();
        int l = 0;
        for (int j(0); j < s; ++j) {
            NumericVector xj = xunique[j];   // compare against the elements already kept
            int xjsize = xj.size();
            if (xisize != xjsize) {
                ++l;                          // different lengths cannot be equal
            }
            else {
                if (sum(xi == xj) == xisize) {
                    goto notkeep;             // duplicate found, skip xi
                }
                else {
                    ++l;
                }
            }
        }
        if (l == s) {                         // xi differs from every kept element
            xunique[s] = xi;
            ++s;
        }
        notkeep: ;
    }
    return head(xunique, s);
}
/*** R
x <- list(1,42, 1, 1:3, 42)
uniqueList(x)
[[1]]
[1] 1
[[2]]
[1] 42
[[3]]
[1] 1 2 3
microbenchmark::microbenchmark(uniqueList(x), unique(x))
Unit: microseconds
          expr   min    lq    mean median     uq    max neval
 uniqueList(x) 2.382 2.633 3.05103  2.720 2.8995 29.307   100
     unique(x) 2.864 3.110 3.50900  3.254 3.4145 24.039   100
But the R function becomes faster when the List is large. I am sure that someone can optimise this code further.
I was implementing quicksort and I wanted to set the pivot to be the median of three numbers: the first element, the middle element, and the last element.
Could I possibly find the median in fewer comparisons?
int median(int a[], int p, int r)
{
    int m = (p+r)/2;
    if (a[p] < a[m])
    {
        if (a[p] >= a[r])
            return a[p];
        else if (a[m] < a[r])
            return a[m];
    }
    else
    {
        if (a[p] < a[r])
            return a[p];
        else if (a[m] >= a[r])
            return a[m];
    }
    return a[r];
}
If the concern is only comparisons, then this should be used.
int getMedian(int a, int b, int c) {
    int x = a - b;
    int y = b - c;
    int z = a - c;
    // Note: these differences (and the products below) can overflow for extreme int values.
    if (x*y > 0) return b;   // (a-b) and (b-c) have the same sign, so b lies between a and c
    if (x*z > 0) return c;   // a is the max or the min and b is not the median, so c is
    return a;
}
int32_t FindMedian(const int n1, const int n2, const int n3) {
    auto _min = min(n1, min(n2, n3));
    auto _max = max(n1, max(n2, n3));
    // the total minus the extremes leaves the middle value (beware of overflow for large inputs)
    return (n1 + n2 + n3) - _min - _max;
}
You can't do it in one, and you're only using two or three, so I'd say you've got the minimum number of comparisons already.
Rather than just computing the median, you might as well put them in place. Then you can get away with just 3 comparisons all the time, and you've got your pivot closer to being in place.
T median(T a[], int low, int high)
{
    int middle = (low + high) / 2;
    if (a[middle].compareTo(a[low]) < 0)
        swap(a, low, middle);
    if (a[high].compareTo(a[low]) < 0)
        swap(a, low, high);
    if (a[high].compareTo(a[middle]) < 0)
        swap(a, middle, high);
    return a[middle];
}
I know that this is an old thread, but I had to solve exactly this problem on a microcontroller that has very little RAM and does not have a h/w multiplication unit (:)). In the end I found the following works well:
// The index is a 3-bit code built from the three pairwise comparisons;
// comparison combinations that cannot occur map to don't-care entries.
static char medianIndex[] = { 1, 1, 2, 0, 0, 2, 1, 1 };

signed short getMedian(const signed short num[])
{
    return num[medianIndex[(num[0] > num[1]) << 2 | (num[1] > num[2]) << 1 | (num[0] > num[2])]];
}
If you're not afraid to get your hands a little dirty with compiler intrinsics you can do it with exactly 0 branches.
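For example, a branch-free version can be built from min/max intrinsics (a sketch of the idea, assuming SSE4.1 is available; not the original poster's code), using the identity median = max(min(a,b), min(max(a,b), c)):

#include <smmintrin.h>  // SSE4.1 intrinsics

int medianBranchless(int a, int b, int c) {
    __m128i va = _mm_set1_epi32(a);
    __m128i vb = _mm_set1_epi32(b);
    __m128i vc = _mm_set1_epi32(c);
    __m128i lo = _mm_min_epi32(va, vb);
    __m128i hi = _mm_max_epi32(va, vb);
    // median = max(min(a, b), min(max(a, b), c)) -- computed without branches
    __m128i med = _mm_max_epi32(lo, _mm_min_epi32(hi, vc));
    return _mm_cvtsi128_si32(med);
}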
The same question was discussed before on:
Fastest way of finding the middle value of a triple?
Though I have to add that, in the context of a naive quicksort implementation with a lot of elements, reducing the number of branches when finding the median is not so important, because the branch predictor will choke either way once you start tossing elements around the pivot. More sophisticated implementations (which don't branch on the partition operation and avoid WAW hazards) will benefit from this greatly.
Remove the max and min values from the total sum:
int med3(int a, int b, int c)
{
    int tot_v = a + b + c;
    int max_v = max(a, max(b, c));
    int min_v = min(a, min(b, c));
    return tot_v - max_v - min_v;
}
There is actually a clever way to isolate the median element from three using a careful analysis of the 6 possible permutations (of low, median, high). In python:
def med(a, start, mid, last):
    # put the median of a[start], a[mid], a[last] in the a[start] position
    SM = a[start] < a[mid]
    SL = a[start] < a[last]
    if SM != SL:
        return
    ML = a[mid] < a[last]
    m = mid if SM == ML else last
    a[start], a[m] = a[m], a[start]
Half the time you have two comparisons otherwise you have 3 (avg 2.5). And you only swap the median element once when needed (2/3 of the time).
Full python quicksort using this at:
https://github.com/mckoss/labs/blob/master/qs.py
You can write up all the permutations:
1 0 2
1 2 0
0 1 2
2 1 0
0 2 1
2 0 1
Then we want to find the position of the 1. We could do this with two comparisons, if our first comparison could split out a group of equal positions, such as the first two lines.
The issue is that the first two lines differ on every comparison we have available (a<b, a<c, b<c), so no single first comparison can group them together. Hence we have to fully identify the permutation, which requires 3 comparisons in the worst case.
Using a bitwise XOR, the median of three integers can be found: XORing all three values together with the max and the min cancels the max and min terms, leaving the median (this relies on the values being integers).
def median(a, b, c):
    m = max(a, b, c)
    n = min(a, b, c)
    ans = m ^ n ^ a ^ b ^ c  # max and min cancel with their copies, leaving the median
    return ans