I don't know if the title is phrased very well, but I'll try my best to explain what I'm trying to achieve.
I have a project consisting of two crates, main-program and patch. The goal of the program is to capture audio from other processes on the system.
patch is a library crate that compiles to a DLL; it uses a detour to hook into the system audio functions and sends the audio data over IPC.
In main-program I have some code that performs the injection, as well as code that receives the data over IPC.
Currently, I just have a batch script that calls cargo build for each crate and then copies the DLL and EXE to my output folder.
Now, what I want to do is break out the code that does the injection and the receiving of data and, together with the patch crate, turn it into a library, my-audio-capture-lib, that I can publish for use by others.
The optimal result would be that someone can add my-audio-capture-lib to their Cargo.toml as a dependency, specify somewhere what filename they want the DLL to have, and then call a function like my_audio_capture_lib::capture_audio_from_pid in their code to receive audio data. When they build their project, they should get their own binary as well as the DLL from my crate.
This, however, requires that at some point during the build process my-audio-capture-lib produces the DLL needed for injection, and I don't know how to do that, or whether it's even possible.
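To make the desired build-time step concrete, here is a hedged, hypothetical build.rs sketch for my-audio-capture-lib that shells out to cargo to build the patch crate as a cdylib. The relative manifest path, target-directory choice, and DLL name are all assumptions, and nesting cargo invocations inside a build script has its own pitfalls (locking, publishing, dependency tracking), which is essentially what the question is asking about.

```rust
// build.rs (hypothetical sketch, not a confirmed solution)
use std::{env, path::PathBuf, process::Command};

fn main() {
    let out_dir = PathBuf::from(env::var("OUT_DIR").unwrap());

    // Build the `patch` cdylib into OUT_DIR so this nested build and the
    // outer one do not fight over the same target directory.
    let cargo = env::var("CARGO").unwrap_or_else(|_| "cargo".into());
    let status = Command::new(cargo)
        .args(["build", "--release", "--manifest-path", "../patch/Cargo.toml"]) // assumed layout
        .arg("--target-dir")
        .arg(&out_dir)
        .status()
        .expect("failed to spawn cargo for the patch crate");
    assert!(status.success(), "building the patch DLL failed");

    // Report where the DLL ended up; a real build would copy/rename it to
    // whatever filename the consumer configured.
    println!(
        "cargo:warning=patch DLL at {}",
        out_dir.join("release").join("patch.dll").display()
    );
    println!("cargo:rerun-if-changed=../patch/src");
}
```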
Related
We have a commercially sold application that is presently written in Java and Python. We are currently looking at moving to Rust for performance and non-crashy reasons.
In our present Java/Python architecture, we have a feature that manages customisations that particular customers want. This involves placing Java jars/classes and Python files under a specific folder designated for customisation for specific customers. In the application configuration, the Java classpath and the PYTHONPATH have this folder precede the folders containing the normal, uncustomised application code. Because of this, any code in this special folder will override the normal, uncustomised behaviour of the application.
We would like to keep this feature in some form when moving to Rust. We certainly want to avoid distributing source code for the core app (mostly Java now) to our customers and having them compile it, which is what we would need to do if we used Rust's module feature.
Is there a way we can implement this feature when we move to Rust?
Target OSes are a mix of Linux and Windows.
Sounds like you want some kind of plugin architecture, with a dynamic library (also written in Rust) that's loaded at runtime.
Unfortunately, Rust doesn't have a stable ABI yet, meaning that those libraries would have to be compiled with the exact same compiler that built the main application. One workaround is to expose a C ABI from the plugin side and use C FFI to call it, if you can live with the unsafety and hassle that entails. There's also the abi_stable crate, which might be safer/simpler to use.
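As a rough illustration of the C ABI route (names are made up, and the libloading API shown is the 0.7+ style), the plugin crate could be built as a cdylib exporting a plain extern "C" function, and the host could load it at runtime:

```rust
// Plugin side (compiled as a cdylib) -- hypothetical customisation hook:
//
//     #[no_mangle]
//     pub extern "C" fn customise_price(base: f64) -> f64 { base * 0.9 }
//
// Host side: load whatever library sits in the per-customer folder and call it.
use libloading::{Library, Symbol};

fn call_customisation(path: &str, base: f64) -> Result<f64, libloading::Error> {
    unsafe {
        let lib = Library::new(path)?; // e.g. "customisations/acme.so" or ".dll"
        let hook: Symbol<unsafe extern "C" fn(f64) -> f64> = lib.get(b"customise_price")?;
        Ok(hook(base))
    }
}
```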
Scripting languages might be another avenue to explore. For example, Rhai is a language specifically developed for use in Rust applications, and interoperates as seamlessly as these things get. Of course, performance of the scripted parts will not be as great as native Rust code.
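A minimal Rhai illustration, assuming roughly the current rhai crate API (the script text here is inlined only for the example), looks like this:

```rust
use rhai::Engine;

fn main() -> Result<(), Box<rhai::EvalAltResult>> {
    let engine = Engine::new();
    // In the customisation scenario, the script text would be read from the
    // per-customer folder instead of being hard-coded.
    let discounted: f64 = engine.eval("100.0 * 0.9")?;
    println!("discounted price: {discounted}");
    Ok(())
}
```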
I don't think it is possible without recompiling the application, or at least compiling the config.rs file that you intend to create for individual users.
Assuming that the end user does not have Rust installed on their system, a few alternatives might be:
Using .yaml files for loading configs (similar to how GitHub Actions work)
Allowing users to run custom programs (you can use tokio::process to run them in an async manner; see the sketch after this list)
Using rhaiscript (I personally prefer this option)
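For the second option, a hedged sketch of what running a customer-provided program through tokio::process could look like (the path and arguments are placeholders):

```rust
use tokio::process::Command;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Hypothetical customer-specific executable living in the customisation folder.
    let status = Command::new("customisations/acme/post_process")
        .arg("--input")
        .arg("report.json")
        .status()
        .await?;
    println!("customisation exited with {status}");
    Ok(())
}
```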
Taken from the official language docs for the modules feature:
You could set up your commercial project in such a way that the source code is treated as an external crate, and then load it into the main project with the path attribute.
A minimal example, already in the docs:
#[path = "thread_files"]
mod thread {
// Load the `local_data` module from `thread_files/tls.rs` relative to
// this source file's directory.
#[path = "tls.rs"]
mod local_data;
}
Background:
I'm currently writing unit tests for a library that can start other binaries and guarantee that the binary dies after a timeout on Linux.
The unit test currently works by calling a binary that would normally sleep for 10 seconds and then create a file containing some data. The binary should be killed before those 10 seconds elapse, meaning that if the timeout worked, the file should not exist. The path to that binary is currently hardcoded, which is not what I want.
What I need help with:
The problem is that I want to have access to such a binary when the crate is compiled, and then pass its path to the library under test (so the library can call that binary via the execve syscall without a hardcoded location, allowing other users of my crate to compile and run the tests). This means I need a binary to be generated or fetched during compilation, and I need access to its path inside my unit test. Is there any decent approach to doing this?
The code for the binary can be written in whatever language as long as it works, preferably Rust or C/C++. Worst case, it can be precompiled, but I'd like to have it compiled on the fly so it also works on ARM and other architectures.
What I have tried:
The current method is to simply hardcode the binary path and compile the binary manually using g++. This is not optimal, however, since anyone who downloads my crate from crates.io won't have that binary and thus cannot pass its unit tests.
I have been messing around with the cc crate in build.rs, generating C++ code and then compiling it, but cc appears to be intended for compiling libraries, which is not what I want, since it attempts to link the compiled objects into the library (I believe that's what it's doing). I have been googling for a few hours without finding an approach that solves this problem.
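To make the goal concrete, here is a hedged build.rs sketch of the kind of thing being asked for (the helper source path, output name, and GCC/Clang-style flags are assumptions, not a verified answer): it drives the compiler that the cc crate would pick, writes a standalone helper executable into OUT_DIR, and exposes its path to the tests as a compile-time environment variable.

```rust
// build.rs -- hypothetical sketch
use std::{env, path::PathBuf, process::Command};

fn main() {
    let out_dir = PathBuf::from(env::var("OUT_DIR").unwrap());
    let helper = out_dir.join("sleeper"); // binary that sleeps, then writes a file

    // Reuse the compiler detection from the `cc` crate, but produce a
    // standalone executable instead of an object that gets linked in.
    let compiler = cc::Build::new().get_compiler();
    let status = Command::new(compiler.path())
        .arg("tests/helpers/sleeper.c") // assumed location of the helper source
        .arg("-o")
        .arg(&helper)
        .status()
        .expect("failed to run the C compiler");
    assert!(status.success(), "compiling the test helper failed");

    // Make the path available to `env!("SLEEPER_BIN")` inside the tests.
    println!("cargo:rustc-env=SLEEPER_BIN={}", helper.display());
    println!("cargo:rerun-if-changed=tests/helpers/sleeper.c");
}
```

A test could then do something like `let helper = env!("SLEEPER_BIN");` and hand that path to the library under test instead of a hardcoded one.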
I currently have one main application, parts of which can be "live-patched", e.g. some functions with a predefined name and signature can be updated at runtime. Currently, the repatching is performed in these two steps:
use std::process::Command to call rustc to compile a cdylib from the source. Each output file has a new name in order to make sure that dlopen does not use the cached old file
use libloading to load and run the newly patched function
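Roughly sketched, with placeholder names, paths, and signature, the current two-step flow looks something like this:

```rust
use std::process::Command;
use libloading::{Library, Symbol};

fn recompile_and_call(src: &str, generation: u64) -> Result<i64, Box<dyn std::error::Error>> {
    // Unique name per recompilation so dlopen never hands back a cached
    // handle to an older build.
    let out = format!("/tmp/live_patch_{generation}.so");

    // Step 1: compile the patch source as a cdylib with rustc.
    let status = Command::new("rustc")
        .arg("--crate-type").arg("cdylib")
        .arg("-O")
        .arg("-o").arg(&out)
        .arg(src)
        .status()?;
    if !status.success() {
        return Err("rustc failed".into());
    }

    // Step 2: load the fresh library and call the predefined entry point.
    unsafe {
        let lib = Library::new(&out)?;
        let patched: Symbol<unsafe extern "C" fn(i64) -> i64> = lib.get(b"patched_fn")?;
        Ok(patched(42))
    }
}
```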
This is obviously not ideal for a couple of reasons. My question is whether there is a better way to achieve this, e.g. doing the compilation from within Rust.
Additional info:
The patch files do not require any external crates
The patch files need to be aware of some common lib modules, which do not get live patched
In order to reduce the executable size of a Rust program (called runtime in my code), I am trying to compress it and then include it in a second program (called szl) that decompresses it and executes it.
I have done that by using a Cargo build script in szl that opens the output binary from runtime, compresses it, and then generates a file that is ready for use by include_bytes!.
The issue with this approach is that the dependencies are not handled properly. For example, Cargo may try to build szl before runtime (and fail), and when the source code of runtime is modified, szl is not rebuilt.
Is there a way to tell Cargo that szl depends on the binary from runtime (and transitively on the source code of runtime), or should I use another approach such as an external Makefile?
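For reference, the build script approach being described is roughly the following sketch (the paths are assumptions and the compression step is elided); it also shows why the dependency tracking is incomplete: only the prebuilt binary is watched, not runtime's sources.

```rust
// build.rs for `szl` -- rough sketch of the current approach
use std::{env, fs, path::PathBuf};

fn main() {
    let out_dir = PathBuf::from(env::var("OUT_DIR").unwrap());

    // Assumed location of the already-built runtime binary.
    let runtime = PathBuf::from("../runtime/target/release/runtime");
    let payload = fs::read(&runtime).expect("build `runtime` before `szl`");

    // A real build script would compress `payload` here (e.g. with a crate
    // such as flate2) before writing it out.
    fs::write(out_dir.join("runtime.bin"), payload).unwrap();

    // Only the binary itself is tracked, so editing runtime's sources does
    // not trigger a rebuild of szl -- exactly the problem described above.
    println!("cargo:rerun-if-changed=../runtime/target/release/runtime");
}
```

The main program can then embed the payload with `include_bytes!(concat!(env!("OUT_DIR"), "/runtime.bin"))`.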
While not exactly your use case, you might get it to work with the links manifest key. It would allow you to express a dependency between the two programs, and you can pass more information along with DEP_FOO_KEY environment variables.
Before you go to such drastic measures, it might be worth trying other known strategies for reducing Rust binary size, such as running strip to remove debug symbols, enabling LTO, setting panic=abort, etc.
I'd like to use @Grab in my Groovy program, but my program consists of several files. The examples on the Groovy Grape page all seem to assume that your script will consist of one file. How can I do this? Should I just add it to one of the files and expect that the imports will work from the others? If so, is it common to place all the @Grab calls in one file with no other code? Do I need to add the @Grab call to all files that will import the package? Do I need to download the JAR and create a Gradle file, which I was getting away without at this point?
The Grape engine and the @Grab annotation were created as part of core Groovy with single-file scripts in mind, to allow a chunk of text to easily become a fully functional program.
For larger applications, Gradle is an awesome build tool with lots of useful features.
But yes, you can manage all the application dependencies just with Grape.
Whether you annotate every file or a single one does not matter; just make sure the @Grab-annotated file is read before you try to use the external class.
Annotating the main class is probably better, as you will easily lose track of library versions if you have the annotations scattered around.
And yes, you should consider Gradle for any application with more than a dozen files, or anything you might want to reuse elsewhere as a library.
In my opinion, it depends on how your program is to be run...
If your program is to be run as a collection of standalone scripts, then I'd probably stick the @Grab annotations required for each script at the top of each of them.
If your program is more of a standard style program with a single point of entry, then I'd go for using a build tool like Gradle (as you say), as you get a lot of easy wins by using it.
Firstly, it makes it easy to define your dependencies (and build a single large jar containing all of them).
Secondly, Gradle makes it really easy to start writing tests, include code coverage plugins, or add useful tools like CodeNarc to suggest possible fixes or improvements to your code. These all become invaluable not only for improving your code (or knowing your code works), but also when refactoring: you know you've not broken anything that used to work.