How to use the binary output of a Cargo project as the input of another one?

In order to reduce the executable size of a Rust program (called runtime in my code), I am trying to compress it and then include it in a second program (called szl) that decompresses it and executes it.
I have done that by using a Cargo build script in szl that opens the output binary from runtime, compresses it, and then generates a file that is ready for use by include_bytes!.
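For concreteness, the build script in szl looks roughly like this (the path to the runtime binary and the choice of flate2 for compression are illustrative, not the exact code):

    // szl/build.rs -- a minimal sketch of the approach described above.
    // Assumes flate2 is declared under [build-dependencies].
    use std::env;
    use std::fs::File;
    use std::io;
    use std::path::Path;

    use flate2::write::GzEncoder;
    use flate2::Compression;

    fn main() -> io::Result<()> {
        // Hypothetical location of the already-built runtime binary.
        let runtime = Path::new("../runtime/target/release/runtime");
        println!("cargo:rerun-if-changed={}", runtime.display());

        let out_dir = env::var("OUT_DIR").expect("OUT_DIR is set by Cargo");
        let mut input = File::open(runtime)?;
        let mut encoder = GzEncoder::new(
            File::create(Path::new(&out_dir).join("runtime.gz"))?,
            Compression::best(),
        );
        io::copy(&mut input, &mut encoder)?;
        encoder.finish()?;
        Ok(())
    }

szl then embeds the result with something like:

    static RUNTIME_GZ: &[u8] =
        include_bytes!(concat!(env!("OUT_DIR"), "/runtime.gz"));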
The issue with this approach is that dependencies are not handled properly. For example, Cargo may try to build szl before runtime (and fail), and when the source code of runtime is modified, szl is not rebuilt.
Is there a way to tell Cargo that szl depends on the binary from runtime (and transitively on the source code of runtime), or should I use another approach such as an external Makefile?

While not exactly your use case, you might get it to work with the links manifest key. It would allow you to express a dependency between the two programs, and you can pass extra information along with DEP_FOO_KEY environment variables.
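A rough sketch of how that mechanism works (the links value and key name are illustrative): if runtime's Cargo.toml declares links = "runtime", then every cargo:key=value line printed by runtime's build script shows up as DEP_RUNTIME_KEY in the build scripts of crates that depend on runtime:

    // runtime/build.rs -- assumes `links = "runtime"` in runtime's
    // Cargo.toml, which requires this build script to exist.
    fn main() {
        // Becomes DEP_RUNTIME_BINARY_PATH for dependent build scripts.
        // The path here is illustrative; it would have to be computed.
        println!("cargo:binary_path=target/release/runtime");
    }

    // szl/build.rs would then read it with:
    //
    //     let path = std::env::var("DEP_RUNTIME_BINARY_PATH").unwrap();
    //     // ...compress the file at `path` into OUT_DIR as before...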
Before you go to such drastic measures, it might be worth trying the other known strategies for reducing Rust binary size first, such as stripping debug symbols, enabling LTO, and building with panic=abort.
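For reference, most of these map to release-profile settings in Cargo.toml; typical values look like this (not taken from the original answer):

    [profile.release]
    strip = true        # strip symbols from the binary
    lto = true          # link-time optimization
    panic = "abort"     # drop the unwinding machinery
    opt-level = "z"     # optimize for size
    codegen-units = 1   # better optimization, slower builds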

Related

Generating an additional artifact when crate is used as a dependency

I don't know if the title is phrased very well but I'll try my best to explain what I'm trying to achieve.
I have a project consisting of two crates, main-program and patch; the goal of the program is to capture audio from other processes on the system.
patch is a library crate that compiles to a DLL; it hooks into the system audio functions via a detour and sends the audio data over IPC.
In main-program I have some code that does the injection, as well as code to receive the data over IPC.
Currently, I just have a batch script that calls cargo build for each crate and then copies the DLL and EXE to my output folder.
Now, what I want to do, is break out the code that does the injection and the receiving of data, and together with the patch crate, I want to create a library, my-audio-capture-lib that I can publish for use by others.
The optimal result would be that someone can add my-audio-capture-lib to their Cargo.toml as a dependency, specify somewhere what filename they want the DLL to have, and then call a function like my-audio-capture-lib::capture_audio_from_pid in their code to receive audio data. When they build their project, they should get their own binary as well as the DLL from my crate.
This however requires that at some point during the build process, my-audio-capture-lib produces the necessary DLL for injection. And I don't know how to do that, or if it's even possible to do.

Need help unit testing a Rust library that has calls to execve

Background:
I'm currently writing unit tests for a library that is capable of starting other binaries and guaranteeing the binary will die after a timeout on Linux.
The unit test currently works by calling a binary that would normally sleep for 10 seconds and then create a file containing some data. The binary should be killed before those 10 seconds elapse, meaning the file should not exist if the timeout worked. The path to that binary is currently hardcoded, which is not what I want.
What I need help with:
The problem is that I want access to such a binary when the crate is compiled, and then to pass its path to the library being tested (so I can call the binary via the execve syscall without hardcoding its location, allowing other users of my crate to compile and test it). This means I need to somehow have a binary generated or grabbed during compilation, and somehow have access to its path inside my unit test. Is there any decent approach to doing this?
The code for the binary can be written in whatever language as long as it works, though preferably Rust or C/C++. Worst case it can be precompiled, but I'd like to have it compiled on the fly so it works on ARM and other architectures as well.
What I have tried:
The current method is to simply hardcode the binary's path and compile it manually using g++. This is not optimal, however, since anyone who downloads my crate from crates.io won't have that binary and thus cannot pass its unit tests.
I have been messing around with the cc crate in build.rs, generating C++ code and then compiling it, but cc appears to be for compiling libraries, which is not what I want, since it attempts to link the compiled code into the library (I believe that's what it's doing). I have been googling for a few hours without finding any approach that solves this problem.
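One hedged sketch of the binary-producing idea (the compiler, file names, and the SLEEPER_BIN variable are all assumptions): have build.rs invoke the compiler directly, rather than going through cc's library-oriented Build::compile(), and hand the resulting path to the tests via cargo:rustc-env:

    // build.rs -- compiles a standalone helper binary into OUT_DIR and
    // exports its path so tests can find it at compile time.
    use std::env;
    use std::path::PathBuf;
    use std::process::Command;

    fn main() {
        println!("cargo:rerun-if-changed=tests/sleeper.cpp");

        let out = PathBuf::from(env::var("OUT_DIR").unwrap());
        let bin = out.join("sleeper");
        let status = Command::new("g++")
            .args(["tests/sleeper.cpp", "-o"])
            .arg(&bin)
            .status()
            .expect("failed to spawn g++");
        assert!(status.success(), "helper binary did not compile");

        // Visible in the crate and its tests via env!("SLEEPER_BIN").
        println!("cargo:rustc-env=SLEEPER_BIN={}", bin.display());
    }

A test can then do let path = env!("SLEEPER_BIN"); and pass that path to the library under test instead of a hardcoded one.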

Test for GHC compile time errors

I'm working on proto-lens#400, tweaking a Haskell code generator. In one of the tests I'd like to verify that a certain API has not been built. Specifically, I want to ensure that a certain type of program will not type check successfully. I'd also have a similar program with one identifier changed which should compile, to guard against a typo breaking the test. Reading Extending and using GHC as a Library, I have managed to have my test write a small file and compile it using GHC as a library.
But I need the code emitted by the test to load some other modules, specifically the output of the project's code generator and its runtime environment with transitive dependencies. I have at best a very rough understanding of stack and hpack, which provide the build system. I know I can add dependencies to some package.yaml file to make them available to individual tests, but I have no clue how to access such dependencies from the GHC session set up as part of running the test. I imagine I might find some usable data in some environment variables, but I also believe such an approach might be undocumented and prone to break without warning.
How can I have a test case use GHC as a library and have it access dependencies expressed in package.yaml? Or alternatively, can I use some construct other than a regular test case to express a file with dependencies but check that the file won't compile?
I don't know if this applies to you, because there are too many details going way over my head, but one way to test for type errors is to build your test suite with -fdefer-type-errors and catch the resulting exception (of type TypeError) at run time.
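A minimal sketch of that trick (the definitions and messages are illustrative): with deferred type errors the ill-typed definition still compiles, and forcing it at run time raises a TypeError that the test can catch:

    {-# OPTIONS_GHC -fdefer-type-errors #-}
    import Control.Exception (TypeError (..), evaluate, try)

    illTyped :: Int
    illTyped = "not an Int"   -- deferred: becomes a run-time TypeError

    main :: IO ()
    main = do
      r <- try (evaluate illTyped) :: IO (Either TypeError Int)
      case r of
        Left (TypeError _) -> putStrLn "got the expected type error"
        Right _            -> putStrLn "unexpectedly type checked"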

Build-time determination of SCons targets

I have some targets that need to be built in order to determine what some of my other targets are. How do I tell SCons?
An example:
A script, generate, is run on some configuration files. This script produces include paths and build flags based on information in those configuration files. In order to build a SCons Object, I need to read the generated files.
I was just running Execute() on generate, but it now has lots of files to generate and takes a good amount of time, so I only want to run it when it or a configuration file changes. How do I tell SCons to ask me at build time for some more targets once this Command has done anything it needs to do?
OK, some SCons clarifications first. SCons has two phases in a build. First, in the analysis phase, all SCons scripts are executed, and the result is a static dependency tree describing the source and target files of all the builders defined in the scripts. Next, based on that tree, the build database from the last build, and the signatures of the files on disk, all builders with out-of-date targets are run.
Now to your question. If you want to run generate only when necessary (when generate or the configuration files change), then running generate as part of the analysis phase is out of the question. So don't use Execute(). Instead, generate must be a builder of its own. So far so good.
Now you have two builders: the first is generate, and the second I'll call buildObject. buildObject depends on the targets of generate, but as you state, the generate targets are unknown at analysis time (because generate is not run; it is only set up as a builder). Having unknown targets at analysis time is a classic challenge with SCons, and there is no easy way to solve it.
I normally solve it by using what I call an SCons.pleaser file.
In your case it would be a known target that generate produces, containing a high-resolution timestamp. The buildObject builder then takes this file as a source.
Now, if your configuration files have not changed, generate will not run, SCons.pleaser will not change, and buildObject will not run. If you change your configuration files, generate will run, SCons.pleaser will change, and buildObject will run as well.
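A rough SConstruct sketch of that pattern (script names, paths, and the config glob are illustrative):

    import time

    env = Environment()

    def run_generate(target, source, env):
        # ... run the real generate script over the config files here,
        # then touch the known target with a high-resolution timestamp
        # so dependents rebuild exactly when generate actually ran.
        with open(str(target[0]), "w") as f:
            f.write(str(time.time_ns()))

    pleaser = env.Command("SCons.pleaser", Glob("config/*.cfg"), run_generate)

    # buildObject takes the pleaser file as an extra dependency, so it
    # reruns only when the configuration (and generated files) change.
    obj = env.Object("main.c")
    env.Depends(obj, pleaser)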
Regards
The solution I went with was to make a new SConstruct that knows how to do the generate phase, and to Execute() it early in my SConscripts, before I get to the bits where its output is needed. It works well, since it just builds things as necessary, with the small fixed overhead of invoking SCons from within SCons.
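In SConscript terms, that nested invocation could look something like this (the directory name is illustrative):

    # Early in the top-level SConscript: run the generator build every
    # time; the inner SCons invocation decides what is out of date.
    if Execute("scons -Q -C generate_dir"):
        Exit(1)  # abort the outer build if generation failed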

Finding all dependencies in a Verilog compile

I'm trying to cheaply and accurately predict all the SystemVerilog dependencies for a build flow. It is OK to over-predict the dependencies and pick up a few Verilog files that aren't actual dependencies, but I don't want to miss any.
Do I actually have to parse the Verilog in order to determine all its dependencies? There are `include preprocessor directives, but those don't seem to pull in all the code currently getting compiled. There is a SYSTEM_VERILOG_PATH environment variable. Do I need to parse every SystemVerilog file on SYSTEM_VERILOG_PATH in order to determine which modules are defined in which files?
One good way (if this is synthesizable code) is to use your synthesis tool file list (e.g. .qsf for Altera). That tends to be complete, but if it isn't, you can look at the build log for missing files that it found.
From a readily compiled environment it is possible to dump the source files, e.g. with Cadence:

    -- To list source files used by the snapshot 'worklib.top:snap'
    % ncls -source -snapshot worklib.top:snap
If you are starting from scratch, however, I am afraid there is no easy solution. I would go for the pragmatic one: have a config file with all the directories that contain .sv files and then compile everything in them. If your project has a proper file structure, you could also modularize this by supplying config files for every major block.
Hope that helps.
I know Questa has a command-line option that will generate a makefile for you, with all the dependencies in it, after you have compiled your design. I'm not sure whether the other simulators have that.
Another option is to browse and dump your compiled library in your simulator. You probably won't get the actual filenames the modules were compiled from, but it will be a lot easier to parse all your Verilog files for just the module names that show up in the compiled library.
