I know about sys.stdout.write(), but that uses the sys module.
input("string") wouldn't be valid either, since it's a built-in function.
It used to be possible in Python 2, because print was a language statement, while it is a built-in function in Python 3. But there is only a very tiny difference between statements and built-in functions or types. And IMHO trying to avoid built-ins is close to nonsense in Python.
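For concreteness, a minimal sketch of the three spellings side by side (the Python 2 line is left as a comment because it is a syntax error in Python 3):

import sys

# Python 2 only: print was a statement, so no call syntax was needed
# print "hello"

# Python 3: print is a built-in function
print("hello")

# Avoiding print still drags in the sys module
sys.stdout.write("hello\n")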
I'm learning to use Rcpp in R. Would you please explain to me the difference between R::runif() and Rcpp::runif()?
I have 3 questions:
Do these 2 functions produce the same stream of random numbers, given that we set the same seed before running each of them?
Which function is preferable when using Rcpp? I mean, it seems to me that the 2 functions produce the same thing, but Rcpp::runif() will run faster.
How do I call Rcpp::runif() in a .R file? Is it true that Rcpp::runif() can be called only from a .cpp file and not in R? (I mean, it seems to me that Rcpp::runif() is used extensively to write other C++ functions, and then I would import those functions with sourceCpp() to use them in R.)
Thank you very much for your help!
I suspect this question is a duplicate so I may close this but here goes:
Yes, they do. The whole point of the RNG interfaces is guaranteeing just that.
Entirely up to you. Sometimes you want to wrap or use a C API example and you have R::runif() for that. Sometimes you want efficient vector operations for which you have Rcpp::runif().
You write a C++ function accessing the C++ API. Note that not all those functions will be faster than calling what R offers when what R offers is already vectorised. Your wrapping of Rcpp::runif() will not be much different in performance from calling stats::runif(). You use the C++ accessor in C++ code when writing something you cannot easily get from R.
Edit: This Rcpp Gallery post has some background and examples.
I was reading PyTorch's source code, and I find it odd that it doesn't implement a convolution_backward function. The only convolution_backward_overrideable function directly raises an error; execution is not supposed to reach it.
So I referred to the CuDNN / MKLDNN implementations; they both implement functions like cudnn_convolution_backward.
I have the following questions:
What are the native CUDA/CPU implementations? I can find something like thnn_conv2d_backward_out, but I could not find where it is called.
Why doesn't PyTorch put a convolution_backward function in Convolution.cpp? It offers a _convolution_double_backward() function, but that is the double backward, i.e. the gradient of the gradient. Why don't they offer a single backward function?
If I want to call the native convolution/convolution_backward functions for my plain CPU/CUDA tensors, how should I write the code? Or what could I refer to? I couldn't find an example of this.
Thanks!
1- The implementation may differ depending on which backend you use: it may use the CUDA convolution implementation from some library, the CPU convolution implementation from some other library, or a custom implementation; see here: pytorch - Where is “conv1d” implemented?.
2- I am not sure about the current version, but the single backward was calculated via autograd; that is why there was not a separate explicit function for it. I don't know the underlying details of autograd, but you can check https://github.com/pytorch/pytorch/blob/master/torch/csrc/autograd/autograd.cpp. That double_backward function is only there if you need higher-order derivatives.
3- If you want to do this in C++, the file you linked (convolution.cpp) shows you how (the at::Tensor _convolution... function). If you inspect that function, you see it just checks which implementation to use (params.use_something...) and uses it. If you want to do this in Python, you should start tracing from conv down to where this file, convolution.cpp, is called.
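To illustrate point 2 from the Python side, a minimal sketch (shapes are made up; the torch.nn.grad helpers are assumed to be available, as they are in recent PyTorch builds): the single backward comes out of autograd rather than an explicit convolution_backward call.

import torch
import torch.nn.functional as F

# Illustrative shapes: batch 1, 3 input channels, 8 output channels, 5x5 image
x = torch.randn(1, 3, 5, 5, requires_grad=True)
w = torch.randn(8, 3, 3, 3, requires_grad=True)
out = F.conv2d(x, w, stride=1, padding=1)

# The single backward goes through autograd; no explicit backward call is needed
grad_out = torch.ones_like(out)
grad_x, grad_w = torch.autograd.grad(out, (x, w), grad_out)

# torch.nn.grad exposes the same gradients as standalone functions,
# which dispatch to the backend-specific implementations underneath
grad_x2 = torch.nn.grad.conv2d_input(x.shape, w, grad_out, stride=1, padding=1)
grad_w2 = torch.nn.grad.conv2d_weight(x, w.shape, grad_out, stride=1, padding=1)
print(torch.allclose(grad_x, grad_x2), torch.allclose(grad_w, grad_w2))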
I have figured out something in addition to #unlut's post.
The convolution methods are in separate files for different implementations. You may find cudnn_convolution_backward or mkldnn_convolution_backward easily. One tricky thing is that the final native fallback function is hard to find. That is because the PyTorch team is currently porting the THNN functions to ATen; you can refer to PR24507.
The native function can be found as thnn_conv2d_backward.
The convolution backward is not calculated via autograd; rather, there must be a conv_backward function, and it must be recorded in derivatives.yaml. If you want to find a specific backward function, that file is a good place to start.
About this code: if you want to call the thnn_conv2d_backward function directly, you need to explicitly construct finput and fgrad_input. These are two empty tensors serving as buffers.
// finput and fgrad_input are empty tensors used as scratch buffers by the THNN kernel
at::Tensor finput = at::empty({0}, input.options());
at::Tensor fgrad_input = at::empty({0}, input.options());
// kernel spatial size comes from the trailing dims of the weight tensor (out_ch, in_ch, kH, kW)
auto kernel_size = weight.sizes().slice(2);
// returns a tuple of (grad_input, grad_weight, grad_bias), selected by output_mask
auto&& result = at::thnn_conv2d_backward(grad_output, input, weight, kernel_size, stride, padding,
                                         finput, fgrad_input, output_mask);
Is it possible to have exponential terms in the objective function in Gurobi?
I want to optimize the following objective function:
min_x ||x - y||_2 + log(b + exp(k + x))
Gurobi does provide support for log and exponential functions in constraints, but I couldn't find anything for the objective function.
I am using Gurobi's python API.
The way to formulate this is to add an auxiliary variable for each nonlinear expression and constrain it to equal that expression; the objective then refers only to the auxiliary variables. This post in the Gurobi Community explains it pretty well.
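For illustration, a minimal sketch of that pattern in gurobipy, assuming scalar x and y and swapping the L2 norm for the squared distance so that term stays a plain quadratic (the data values are made up):

import gurobipy as gp
from gurobipy import GRB

y, b, k = 2.0, 1.0, 0.5   # made-up data for illustration

m = gp.Model("exp_in_objective")
x = m.addVar(lb=-GRB.INFINITY, name="x")

# Auxiliary variables carry the nonlinear pieces; the objective only sees them
u = m.addVar(lb=-GRB.INFINITY, name="u")        # u = k + x
expu = m.addVar(name="expu")                    # expu = exp(u)
v = m.addVar(name="v")                          # v = b + expu
logv = m.addVar(lb=-GRB.INFINITY, name="logv")  # logv = log(v)

m.addConstr(u == k + x)
m.addGenConstrExp(u, expu)   # general constraint: expu = exp(u)
m.addConstr(v == b + expu)
m.addGenConstrLog(v, logv)   # general constraint: logv = log(v)

# The quadratic term goes straight into the objective; Gurobi approximates the
# exp/log general constraints piecewise-linearly (see FuncPieces / FuncPieceError)
m.setObjective((x - y) * (x - y) + logv, GRB.MINIMIZE)
m.optimize()
print(x.X, m.ObjVal)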
I am looking for a Python class that:
Is supported by Python 2.7
Acts as an in-memory pipe with separate read and write pointers
Is thread-safe
Ideally, has methods that resemble the methods on regular file objects (read, write, etc.)
Is ideally already in the Python standard library or at least accessible from pip
Does such a creature exist? Maybe BufferedRWPair, but the documentation for it is tragic:
https://docs.python.org/2/library/io.html#io.BufferedRWPair
It turns out I was looking for https://docs.python.org/2/library/os.html#os.mkfifo . It does exactly what I need.
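For anyone landing here later, a minimal sketch of the mkfifo approach (POSIX only; the path and payload are made up for illustration). Both ends look like ordinary file objects:

import os
import threading

fifo_path = "/tmp/demo_fifo"   # made-up path for illustration
if not os.path.exists(fifo_path):
    os.mkfifo(fifo_path)

def writer():
    # Opening the write end blocks until a reader opens the other end
    with open(fifo_path, "wb", 0) as w:
        w.write(b"hello through the pipe\n")

t = threading.Thread(target=writer)
t.start()

# The read end behaves like a regular file object (read, readline, ...)
with open(fifo_path, "rb", 0) as r:
    print(r.readline())

t.join()
os.unlink(fifo_path)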