Can we download something & set an environment variable during crate installation? - rust

As a Rust driver crate developer, I would like to perform the following steps during my crate's installation/download when it is used by another Rust program:
Check the platform, i.e. Windows, UNIX, or macOS.
Download the corresponding platform-specific binary from an external website.
Set an environment variable pointing to the download location.
I know this is possible in Node, Python, or R, but I am not sure whether it is possible in Rust.

You can use a build script to achieve that (but it is not what you should do; please see the note below).
The script will be compiled and executed before Cargo starts building your library.
Inside the script you can use the cfg attribute to check the platform (for cross-compilation, prefer the CARGO_CFG_TARGET_OS environment variable that Cargo sets for build scripts, since cfg in a build script reflects the host).
There are a bunch of libraries to download something via HTTP, for example reqwest.
You can set an environment variable via cargo:rustc-env=VAR=VALUE (this makes it available to your crate at compile time through the env! macro).
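Putting those pieces together, a minimal build.rs sketch might look like the following (the download URL and binary names are hypothetical, and reqwest with its "blocking" feature is assumed as a build dependency):

    // build.rs -- a sketch, not a drop-in implementation
    use std::{env, fs, path::Path};

    fn main() {
        // Cargo exposes the compilation target's OS to build scripts.
        let target_os = env::var("CARGO_CFG_TARGET_OS").unwrap();
        let binary_name = match target_os.as_str() {
            "windows" => "driver-windows.exe", // hypothetical file names
            "macos" => "driver-macos",
            _ => "driver-linux",
        };

        // Hypothetical URL; replace with the real download location.
        let url = format!("https://example.com/binaries/{binary_name}");
        let out_dir = env::var("OUT_DIR").unwrap();
        let dest = Path::new(&out_dir).join(binary_name);

        // Download the platform-specific binary into OUT_DIR.
        let bytes = reqwest::blocking::get(url)
            .expect("download failed")
            .bytes()
            .expect("failed to read response body");
        fs::write(&dest, &bytes).expect("failed to write binary");

        // Expose the download location to the crate at compile time.
        println!("cargo:rustc-env=DRIVER_BINARY_PATH={}", dest.display());
    }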
IMPORTANT NOTE
Most Rust users don't expect that kind of behavior from a build script. There may be dozens of problems with the approach. Just a few of them off the top of my head:
First of all, there may be security issues.
The approach can break builds on the client side (for example, on offline or firewalled machines).
I believe it's better to ship all the binaries you need as part of the crate. You can use include_bytes! for that.
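For instance, a minimal sketch of the bundling approach (the binaries/ paths are hypothetical):

    // In lib.rs: embed the platform-specific binary in the crate itself.
    #[cfg(target_os = "windows")]
    static DRIVER: &[u8] = include_bytes!("../binaries/driver-windows.exe");
    #[cfg(target_os = "macos")]
    static DRIVER: &[u8] = include_bytes!("../binaries/driver-macos");
    #[cfg(all(unix, not(target_os = "macos")))]
    static DRIVER: &[u8] = include_bytes!("../binaries/driver-linux");

At run time the crate can write DRIVER out to a cache or temporary directory and hand that path to callers, which avoids both the network access and the environment-variable juggling.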

Related

How do I build Nim library packages?

I've created a nimble library package as per the documentation. When I try to build it using nimble build I get the following error.
Error: Nothing to build. Did you specify a module to build using the bin key in your .nimble file?
I can do this, and it does fix the error, but according to the documentation, adding the bin key to the .nimble file turns my package into a binary package.
Other things I have tried:
Use nimble install: This does not appear to verify that my code will actually compile and will happily install anything to the local package directory (I added a C# class to my .nim file, for example, and it was successfully installed).
Use nimble c: This works but I have to pass in the path to the nim file I want to compile and the binDir entry in the .nimble file is ignored resulting in the output being placed in the same directory as the file being built. This complicates the development cycle because I have to manually clean up after the compiler.
Use the compiler directly. This is pretty much the same as the previous option with the same flaws.
I guess I could also create a separate .nim file and import my library after it is installed but this is a big overhead for just wanting to verify that a package in the early stages of development will actually compile.
I just want to be able to verify that the source code in my library package is syntactically correct and will compile. How is this meant to be done for library packages?
From your provided link to the nimble package manager documentation, I have the feeling that
https://github.com/nim-lang/nimble#tests
is what you are looking for. But I have never used the test command, so I am not sure. I still do my tests manually; I read the nimble docs maybe 4 years ago and cannot really remember them. And currently there is a lot of package-manager-related work going on: I heard there is a new, alternative package manager called nimph, and from a forum thread I gathered that nimble is going to change and improve as well. Maybe you should consider subscribing to the Nim forum; that is the place where the bright Nim devs are. Well, at least a few of them.

Build a linux static library

For a website of mine, I'm trying to make wkhtmltopdf (link) work. The website is on shared hosting, which is a bit troublesome when using libraries that are not installed.
After a few tries with multiple versions of the library (some were supposed to be static, but I still got errors about shared libraries not being found), I ended up contacting the provider, who told me that it would work if I had a static version of the library.
The problem is, my Linux knowledge is very limited.
If I understand correctly, a static library would be a version of wkhtmltopdf, one single file, including all dependencies?
As the official site mentions, those would be: zlib, fontconfig, freetype, and the X11 libs (libX11, libXext, libXrender).
My second question is: could you point me to where I could find a step-by-step guide to build such a library? My research has been unsuccessful so far.
my Linux knowledge is very limited
I will assume you are more or less familiar with Windows DLLs, which are similar to Linux .so files (shared objects).
A shared object can be shared (hence the name) between different programs. In most cases, when the executable is loaded, the library is loaded into memory too. You can see such dependencies with ldd.
A static library (or statically linked library, or static executable, or whatever) is a library that is embedded in the executable at compile time. To statically link your library, you need to rebuild your executable and link with a .a static library file, which is similar to .lib files on Windows (with the Visual Studio compiler, at least, IIRC).
This can be troublesome and time consuming. That's why I advise you to take another route:
On Windows, .dll files that share the same folder as the executable are given higher preference than the ones on the PATH (IIRC). On Linux (and UNIX in general), this is regarded as a security flaw, as someone could easily drop a rogue .so file and alter the program's behavior. You can, however, control this behavior with two environment variables: LD_LIBRARY_PATH and LD_PRELOAD. The second one is a bit more powerful and is just some kind of "dll injection". The first one, however, controls the paths in which .so files are searched for.
So, I advise you to look for the required dependencies with ldd, and do it once again on your server if you can. Look for every missing .so file. You can do so by issuing the command ldd wkhtmltopdf | grep "not found" (note the quotes, since the grep pattern contains a space).
Once you have this list of missing libraries, bundle them together and send them on your server (be aware that they can have some dependencies too). You can probably find them on a local Linux installation of matching architecture, but I encourage you to try to match the distribution with the one of your provider.
Then, issue the wkhtmltopdf call after setting the LD_LIBRARY_PATH environment variable. You can do it like so:
LD_LIBRARY_PATH='/home/me/my_libs':$LD_LIBRARY_PATH /home/me/programs/wkhtmltopdf
Note that I append the old LD_LIBRARY_PATH value at the end. It is rarely set out of the box, but at least you shouldn't have any problems if you do it this way.
To answer your comment: it is indeed a bit like modifying the PATH on Windows (just to make this clear once again: on Linux there is the same PATH environment variable, but it only controls the search path for executables; so we're changing a different variable, LD_LIBRARY_PATH, to specify the library search path).
Please note that in the above example, I didn't change it system-wide, but only for the wkhtmltopdf call. On Windows, there are multiple ways to change the PATH environment variable. You can open the dedicated GUI, which will change the path variable in the registry. But you can also override it locally in a command prompt or batch script. This is exactly what I did here.
Once LD_LIBRARY_PATH is exported, it will be used for every program you call, so it might be dangerous to set it system-wide if you have incompatibilities. Moreover, whatever you try, you won't be able to set it system-wide without root access. So you will at most affect your own programs.
As a final note, you might pull in a lot of dependencies with this project, since it is Qt-based. If you want to rebuild it statically, you have to build Qt first with -static. Next time, you might be interested in some containerization technology, such as Docker (or even AppImage/Flatpak/Snap), which is designed to work around this kind of problem.
For further reading on dynamic link libraries on Linux, you might be interested in this resource or similar.

How to handle 3rd-party static C library dependencies in Rust/Cargo?

There is a 3rd-party C library that I'd like to link to in my Rust project. It is hosted on GitHub and compiles only as a static library. Is there any way to have Cargo fetch this dependency for me? I'm thinking there isn't. I tried adding it as a dependency and got a "Could not find Cargo.toml in ..." error.
As an alternative, I thought of modifying my build.rs file to use the git2-rs crate to download a tag of the library, possibly specified as a tag name passed through an environment variable.
Another option would be to include the source of the C library in my project, but I was thinking if the users of my crate want to use a different (but compatible) version of the 3rd party library with my crate, they wouldn't be able to do so as easily.
So how are others in the community handling situations like this?
In general, you want to create a libfoo-sys crate. That crate will have a build script that compiles the native library and sets up the linker options.
The build script can use build-time dependencies like the cc crate to make the process of downloading and compiling the native library easier.
You can use environment variables or features to choose where the native library comes from. You could use one already installed by the user by their system package manager (or perhaps a hand-compiled version), you could download the source from somewhere, you could include the code in the repository, or you could use a git submodule to reference another git repository instead of actually copying code.
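As an illustration, a build.rs for a hypothetical libfoo-sys crate might combine two of those options: an environment-variable override pointing at a preinstalled copy, with vendored sources compiled by the cc crate as the fallback (all names below are made up):

    // build.rs of the hypothetical libfoo-sys crate
    fn main() {
        // Option 1: let users point at a copy they already have installed.
        if let Ok(dir) = std::env::var("FOO_LIB_DIR") {
            println!("cargo:rustc-link-search=native={dir}");
            println!("cargo:rustc-link-lib=static=foo");
            return;
        }

        // Option 2: compile the vendored C sources with the cc crate;
        // compile() emits the link directives for libfoo.a automatically.
        cc::Build::new()
            .file("vendor/foo.c")
            .include("vendor/include")
            .compile("foo");
    }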
In many cases, you will also use a tool like rust-bindgen to create the "raw" Rust bindings for the C library.
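A sketch of that step, reusing the hypothetical header layout from above (bindgen is assumed as a build dependency):

    // Also in build.rs: generate raw Rust bindings from the C header.
    use std::{env, path::PathBuf};

    fn generate_bindings() {
        let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
        bindgen::Builder::default()
            .header("vendor/include/foo.h") // hypothetical header path
            .generate()
            .expect("failed to generate bindings")
            .write_to_file(out_path.join("bindings.rs"))
            .expect("failed to write bindings");
    }

The crate then pulls the generated file in with include!(concat!(env!("OUT_DIR"), "/bindings.rs")).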

When using someone else's application code, do I need to run CMake to get the project structure for my operating system?

I am getting into a position where I have to use other people's code for projects, for example openTLD. I want to change some of the code to give it more functionality and use it in a different way. What I have found is that many people have packaged their files in such a way that you are supposed to use
cmake
and then
make
and sometimes after that
make install
I don't want to install the software on my system. What I am looking to do is get these people's code to a point where I can add to it in Eclipse, or even just with Nano, and then compile it.
At what point is the code in a workable/usable state? Can I use it after running cmake, or do I need to also call make? Is my thinking correct that it would be better to edit the code after calling cmake as opposed to before? I don't need my finished code to be cross-platform; it will only run on Linux. Is it easier to learn CMake and edit the code before running cmake, as opposed to not learning CMake and editing the code afterwards, if that is possible?
Your question is a little open-ended.
Looking at the openTLD project, there is a binary and a library available for use. If you are interested in using the binary in your code, you need to download the executables (Linux executables are not posted). If you are planning to use the library, you have two options: either use the pre-built library, or build it during your build process. You would include the header files in your custom application and link with the library.
If you add more details, probably others can pitch in with new answers or refine the older ones.

Is there a way to compile node.js source files? [duplicate]

Is there a way to compile a node.js application?
I may be very late, but you can use the "nexe" module, which compiles Node.js plus your script into one executable: https://github.com/crcn/nexe
EDIT 2021: Nexe's latest release is from 2017 and development appears to have otherwise slowed, so the more widely used alternative from Vercel should also be considered these days: pkg
Node.js runs on top of the V8 JavaScript engine, which itself optimizes performance by compiling JavaScript code into native code... so there is no real reason for compiling then, is there?
https://developers.google.com/v8/design#mach_code
EncloseJS.
You get a fully functional binary without sources.
Native modules are also supported (they must be placed in the same folder).
JavaScript code is transformed into native code at compile time using the V8 internal compiler. Hence, your sources are not required to execute the binary, and they are not packaged.
Perfectly optimized native code can be generated only at run time, based on the client's machine. Without that information, EncloseJS can generate only "unoptimized" code. It runs about 2x slower than Node.js.
Also, the Node.js runtime code is put inside the executable (along with your code) to support the Node API for your application at run time.
Use cases:
Make a commercial version of your application without sources.
Make a demo/evaluation/trial version of your app without sources.
Make some kind of self-extracting archive or installer.
Make a closed source GUI application using node-thrust.
No need to install node and npm to deploy the compiled application.
No need to download hundreds of files via npm install to deploy your application. Deploy it as a single independent file.
Put your assets inside the executable to make it even more portable.
Test your app against new node version without installing it.
There was an answer here: Secure distribution of NodeJS applications. Raynos said: V8 allows you to pre-compile JavaScript.
You can use the Closure Compiler to compile your JavaScript.
You can also use the CoffeeScript compiler to compile your CoffeeScript to JavaScript.
What do you want to achieve with compiling?
The task of compiling arbitrary non-blocking JavaScript down to, say, C sounds very daunting.
There really isn't that much speed to be gained by compiling to C or ASM. If you want a speed gain, offload computation to a C program through a subprocess.
Now this may include more than you need (and may not even work for command-line applications in a non-graphical environment, I don't know), but there is nw.js.
It's Blink (i.e. Chromium/Webkit) + io.js (i.e. Node.js).
You can use node-webkit-builder to build native executable binaries for Linux, OS X and Windows.
If you want a GUI, that's a huge plus. You can build one with web technologies.
If you don't, specify "node-main" in the package.json (and probably "window": {"show": false}, although maybe it works to just have a node-main and not a main).
I haven't tried to use it in exactly this way, just throwing it out there as a possibility. I can say it's certainly not an ideal solution for non-graphical Node.js applications.
JavaScript does not have a compiler like, for example, Java or C (you can compare it more to languages like PHP). If you want to write compiled code, you should read the section about addons and learn C. That is rather complex, though, and I don't think you need it; instead, just write JavaScript.
