Is it possible to fuzz binary applications with cargo-fuzz? From following the Rust fuzzing tutorial (https://rust-fuzz.github.io/book/introduction.html), I understand that I can fuzz functions from libraries, but I don't see how cargo-fuzz can be used for fuzzing binary applications.
Introduction
I created a small project that uploads C++ code to an ATtiny85; for this I used Arduino.
Question
But I would like to know whether it is possible to upload and run Rust code on an ATtiny85 or another ATtiny.
If so, how is it done?
Details
I found this GitHub repo for doing this, but it is not explicit about how to export the Rust code to the ATtiny.
The GitHub repo in question: https://github.com/q231950/avr-attiny85-rust?ref=https://githubhelp.com
C++ is cross-compiled to AVR machine code on your development host. What you are loading is not C++ code; the C++ is the source code used to generate the machine-executable binary, and that binary is what you actually load.
You can develop for AVR using any language for which a cross compiler exists. Rust is certainly such a language. This article discusses using Rust on Arduino Uno hardware.
Whether an ATtiny85, with only 8 KB of flash and 512 bytes of SRAM, will support a Rust runtime environment and any useful code I cannot tell; I am not familiar with Rust's runtime requirements. It does not seem like an efficient use of limited resources to me, though, and I would treat it as an academic challenge rather than a practical development approach. I would expect Rust to have a considerably larger run-time footprint than C, or even C++.
I am new to Rust, and so far I have been amazed by its design. But I encountered something that makes me hesitant to use it in commercial projects: the size of the executable binary of a "Hello world" application is 3.2 MB.
-rwxr-xr-x 2 kos kos 3,2M Jul 10 15:44 experiment_app_size
That's huge!
The version of rustc is 1.53.0
The toolchain is stable-x86_64-unknown-linux-gnu.
The build profile is release.
Is it planned to fix this in the future?
Is there a technique I can use to decrease the size of the executable binary file?
Is the same problem relevant to WASM toolchain?
By default, Rust optimizes for execution speed rather than binary size, since for the vast majority of applications this is ideal. But for situations where a developer wants to optimize for binary size instead, Rust provides mechanisms to accomplish this.
Build in Release Mode
Strip Symbols from Binary
Optimize For Size
Enable Link Time Optimization
Reduce Parallel Code Generation Units to Increase Optimization
Abort on Panic
Remove panic String Formatting with panic_immediate_abort
Remove core::fmt with #![no_main] and Careful Usage of libstd
Removing libstd with #![no_std]
Compress the binary
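Several of the items above map directly onto Cargo profile settings. A minimal sketch of a `Cargo.toml` release profile combining them (the `strip` option requires a reasonably recent Cargo; the exact savings will vary by project):

```toml
[profile.release]
strip = true         # strip symbols from the binary
opt-level = "z"      # optimize for size rather than speed
lto = true           # enable link-time optimization
codegen-units = 1    # fewer parallel codegen units, more optimization
panic = "abort"      # abort on panic instead of unwinding
```

Building with `cargo build --release` then picks up these settings. The later steps in the list, such as `panic_immediate_abort` and `#![no_std]`, go further and require nightly features or source-level changes rather than profile settings alone.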
Most techniques described above are applicable to both native and WASM toolchains. Following that guide, it is possible to get a "hello world" binary around 93k.
And here is an extensive article dedicated to optimizing the binary size of a Rust WASM build.
And here is an in-depth discussion on the official Rust forum of the pros and cons of the options a developer has for optimizing binary size.
(Note: "function dependency", not "functional dependency".) Are there tools available that let me build a static function dependency graph from source code? Something that shows me, in a graphical manner, which functions depend on which others.
Yes, there certainly are. If you look in the Development category on Hackage, you'll find tools for:
graphing package dependencies (n.b. requires an older cabal)
graphing module dependencies
graphing function calls
graphing running data structures
In particular, SourceGraph contains many analysis passes, including:
visualizing function calls
computing cyclomatic complexity
visualizing module imports
Other tools that you might be interested in are:
HPC, for visualizing test coverage
ThreadScope, for visualizing runtime behavior
lscabal, extract modules from a package
Here is the function call graph produced by SourceGraph when run over cabal2arch:
I am working with OpenCV, an open source image processing library, and due to complexities in my algorithm I need to use multiple threads for video processing.
How is multi-threading done in C++98? I know that C++11 has built-in library support for threading (std::thread), but my platform (MSVC++ 2010) does not have it. I have also read that the Boost library, a general-purpose extension to the C++ STL, has facilities for multi-threading. I also know that with MSDN support (windows.h) I can create and manage threads for Windows applications. Finally, I found out that the Qt library, a cross-platform GUI solution, has support for threading.
Is there a native way (without any 3rd-party libraries) to create a cross-platform multi-threaded application?
C++98 does not have any support for threading, neither in the language nor the standard library. You need to use a third party library and you have already listed a number of the main candidates.
OpenCV relies on different external systems for multithreading (or more accurately parallel processing).
Possible options are:
OpenMP (handled at the compiler level);
Intel's TBB (external library);
libdispatch (on systems that support it, like MacOS, iOS, *BSD);
GPGPU approaches with CUDA and OpenCL.
In recent versions of OpenCV these systems are "hidden" behind a parallel_for construct.
All this applies to parallel processing, i.e., data-parallel tasks (roughly speaking, processing each pixel or row of the input in parallel). If you need application-level multithreading (for example, a master thread plus workers), then you need to use a framework such as POSIX threads or Qt.
I recommend boost::thread which is (mostly) compatible with std::thread in C++11. It is cross-platform and very mature.
OpenCV's parallelism is internal and does not directly mix with your code. Be aware, though, that it may use more resources and cores than you expect (by design), and this might come at the expense of other processes on the system.
I have had good experiences with Apple's vDSP primitives under OS X and iOS.
http://developer.apple.com/library/mac/#documentation/Accelerate/Reference/vDSPRef/Reference/reference.html
Now I am trying to port some code that relies on vDSP to Linux and I wonder if any equivalents are available built into one of the standard libraries.
While there is not presently any library that matches vDSP, there are several alternatives you might explore. A couple off the top of my head:
OpenCV is an impressive collection of image processing and computer vision routines with a vibrant user and research community.
Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
My personal recommendation would be Eigen.