Using compile, Theano generates C++ code (along with CUDA code) and a compiled library.
Can we reuse those afterwards?
You could load them as a Python module from the cache, or just copy the code somewhere else and modify it to have a friendlier name.
There is also a project to have Theano produce standalone C code as a library; I know it works in some situations, but it has never really been finished. If you are interested in that, I would ask on the theano-dev mailing list.
Background:
I'm currently writing unit tests for a library that can start other binaries and guarantee, on Linux, that a started binary dies after a timeout.
The unit test currently calls a binary that sleeps for 10 seconds and then creates a file containing some data. The binary should be killed before those 10 seconds elapse, meaning the file should not exist if the timeout worked. The path to that binary is currently hardcoded, which is not what I want.
What I need help with:
The problem is that I want access to such a binary when the crate is compiled, and then pass its path to the library under test (so the library can call the binary via the execve syscall without a hardcoded location, and other users of my crate can build it themselves). This means I need a binary to be generated or fetched during compilation, and I need access to its path inside my unit test. Is there any decent approach to doing this?
The binary can be written in whatever language, as long as it works, but preferably Rust or C/C++. Worst case it could be precompiled, but I'd like to have it compiled on the fly so it also works on ARM and other architectures.
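For reference, the helper itself is trivial; a Rust version would be something like this sketch (file name, argument handling, and message are placeholders):

```rust
// tests/helpers/sleeper.rs -- sketch of the helper binary.
use std::{env, fs, thread, time::Duration};

fn main() {
    // The test passes the output path as the first argument.
    let path = env::args().nth(1).expect("usage: sleeper <output-file>");
    // Sleep long enough that a working timeout must kill us first.
    thread::sleep(Duration::from_secs(10));
    // Reaching this point means the timeout did not fire.
    fs::write(path, "timeout did not fire").expect("write failed");
}
```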
What have I tried:
The current method is to hardcode the binary's path and compile the binary manually with g++. This is not optimal, however: anyone who downloads my crate from crates.io won't have that binary and thus cannot pass its unit tests.
I have been messing around with cc in build.rs, generating C++ code and then compiling it, but cc appears to be designed for compiling libraries, which is not what I want, since it attempts to link the result into the crate (I believe that's what it's doing). I have been googling for a few hours without finding an approach that solves this problem.
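To illustrate the shape of what I'm after (simplified; the file names and env-var name are made up, and I don't know whether this is a decent approach): build.rs would compile the helper itself and hand its path to the tests.

```rust
// build.rs -- sketch: build the helper as a standalone executable
// instead of going through the `cc` crate.
use std::{env, path::PathBuf, process::Command};

fn main() {
    // OUT_DIR is provided by Cargo for build scripts.
    let out = PathBuf::from(env::var("OUT_DIR").unwrap());
    let bin = out.join("sleeper");

    // Invoke the compiler directly so nothing tries to link the
    // result into the crate as a library.
    let status = Command::new("rustc")
        .arg("tests/helpers/sleeper.rs")
        .arg("-o")
        .arg(&bin)
        .status()
        .expect("failed to spawn rustc");
    assert!(status.success(), "failed to compile the test helper");

    // Expose the path at compile time; tests read it with
    // env!("TEST_HELPER_BIN").
    println!("cargo:rustc-env=TEST_HELPER_BIN={}", bin.display());
    println!("cargo:rerun-if-changed=tests/helpers/sleeper.rs");
}
```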
I'm using a self-written model in Rcpp that works fine with Rcpp 1.0.6 or 1.0.5. But after updating to Rcpp 1.0.7, the model crashes right after executing the R function that starts it. (However, compilation with sourceCpp() works without any error or warning.)
The Rcpp code is organized as follows: there are several functions written in different C++ files, and these functions are pulled in via header files into my runModel.cpp file, which defines the function exported to R that runs the model.
This function is called like runModel(DateVector SimPeriod, List ModelInput, NumericVector Settings). It may be worth noting that functions in different C++ files use the same variables and sometimes modify them, so I also had to write something like initModel.cpp with a corresponding header file, which is included in almost every C++ file.
I already looked at https://cran.r-project.org/web/packages/Rcpp/news.html to relate the changes made in 1.0.7 to my issue, but unfortunately I have no idea what might be causing the crash. I'd appreciate any comments on this.
I'm sorry that I cannot give a reproducible example, but the model code is too complex to create one (especially because I do not know where the error is hidden).
Is there a method to compile a Theano function into C (or C++) source code so that I can modify it and port it to other platforms?
I had a look in ~/.theano, but the content there is not really helpful: it consists of a lot of tmpXXXX subdirectories and mod.cpp files, none of which are very readable.
Basically I just need symbolic differentiation that outputs the result as C source code. If Theano cannot do this, are there any other frameworks or libraries that could do the job?
Perhaps it's just better to describe my problem.
I'm developing a Haskell library. But part of the library is written in C, and another part actually in raw LLVM. To actually get GHC to spit out the code I want I have to follow this process:
Run GHC with its LLVM backend (-fllvm, plus -keep-llvm-files to keep the .ll output) on both the code that uses the Haskell module and the "Main" module.
Run clang -S -emit-llvm on the C file.
Now I've got the three .ll files from above. I add the part of the library I've handwritten in raw LLVM and llvm-link these into one .ll file.
I then run LLVM's opt on the linked file.
Lastly, I feed the optimised LLVM file back into GHC (which pleasantly accepts it), producing an executable.
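Concretely, the whole dance looks roughly like this (file names are placeholders, and exact flags vary by GHC/LLVM version):

```sh
# 1. LLVM IR out of GHC (LLVM backend, keeping the .ll files)
ghc -fllvm -keep-llvm-files -c Main.hs Lib.hs
# 2. LLVM IR out of clang
clang -S -emit-llvm cbits.c -o cbits.ll
# 3. Link in the handwritten IR as well
llvm-link Main.ll Lib.ll cbits.ll handwritten.ll -S -o linked.ll
# 4. Optimise; this is where the small C functions get inlined
opt -O2 linked.ll -S -o linked.opt.ll
# 5. Feed the result back into GHC to produce an executable
ghc linked.opt.ll -o prog
```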
This process (with appropriate optimisation settings, of course) seems to be the only way I can inline code from C, removing the function-call overhead. Since many of these C functions are very small, this is significant.
Anyway, I want to be able to distribute the library, and I want users to be able to use it as painlessly as possible while still gaining the optimisations from the process above. I understand it's going to be a bit more of a pain than an ordinary library (for example, you're forced to compile via LLVM), but "as painless as possible" is what I'm looking for advice on.
Any guidance will be appreciated. I don't expect a step-by-step answer, because I think it will be complex; just some ideas would be helpful.
I'm trying to cheaply and accurately predict all the SystemVerilog dependencies for a build flow. It is OK to over-predict the dependencies and pick up a few Verilog files that aren't actually dependencies, but I don't want to miss any.
Do I actually have to parse the Verilog in order to determine all of its dependencies? There are tick-include preprocessor directives, but those includes don't seem to cover all the code that is currently being compiled. There is a SYSTEM_VERILOG_PATH environment variable. Do I need to parse every SystemVerilog file on SYSTEM_VERILOG_PATH in order to determine which modules are defined in which files?
One good way (if this is synthesizable code) is to use your synthesis tool's file list (e.g. the .qsf file for Altera). That tends to be complete, but if it isn't, you can check the build log for missing files that the tool found on its own.
From a readily compiled environment it is possible to dump the source files. For example, with Cadence:

    -- To list source files used by the snapshot 'worklib.top:snap'
    % ncls -source -snapshot worklib.top:snap

But if you are starting from scratch, I'm afraid there is no easy solution. I would go for the pragmatic one: have a config file with all the directories that contain .sv files, and then compile everything in them. If your project has a proper file structure, you could also modularize this by supplying a config file for every major block.
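For example, most simulators accept a plain file list passed with -f, which could serve as that config file; something like this (paths made up):

```
// files.f -- example file list
+incdir+rtl/include
rtl/core/alu.sv
rtl/core/decoder.sv
blocks/uart/uart_top.sv
```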
Hope that helps.
I know Questa has a command-line option that will generate a makefile for you, with all the dependencies in it, after you have compiled your design. I'm not sure whether the other simulators have that.
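If I remember correctly, the utility in question is vmake, which Questa inherited from ModelSim: after compiling your design into a library, something like `vmake work > Makefile` dumps a makefile with the dependencies. Treat the exact invocation as approximate and check your version's documentation.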
Another option is to browse and dump your compiled library in your simulator. You probably won't get the actual file names the modules were compiled from, but it will be much easier to search all your Verilog files for just the module names that show up in the compiled library.
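As a rough sketch of that last idea (GNU grep assumed; the directory name is made up):

```sh
# List candidate definition sites for every module, to match against
# the module names dumped from the compiled library.
grep -rnE '^[[:space:]]*module[[:space:]]+[A-Za-z_][A-Za-z0-9_$]*' \
  --include='*.sv' --include='*.v' rtl/
```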