How to get the default Cargo output directory to match the target architecture?

I'd like the build output directory to follow the architecture I'm building on.
Currently, when I run cargo build without specifying a target, it puts the output in ./target/debug or ./target/release. When I build for other target architectures, it puts them into ./target/[architecture string]/debug (or release).
It seems that internally Rust uses rustc -vV to determine the architecture, and I'd like to use that.
Is there a way to have it default to the current target architecture folder without hardcoding the output directory path?
The use case here is that we are building apps on multiple platforms with multiple people, and everybody builds into the same directory. We'd like the output to go into a directory named after the target architecture each person is building on.
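One way to approximate this (a sketch, not a built-in Cargo default) is to feed the host triple reported by rustc -vV back to cargo via --target, which makes Cargo use the ./target/<triple>/debug layout even for native builds:
$ host_triple=$(rustc -vV | sed -n 's/^host: //p')   # e.g. x86_64-unknown-linux-gnu
$ cargo build --target "$host_triple"                # artifacts land in ./target/$host_triple/debug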

Related

Is it okay to use a single shared directory as Cargo's target directory for all projects?

Cargo has the --target-dir flag, which specifies a directory in which to store temporary or cached build artifacts. You can also set it user-wide in the ~/.cargo/config file. I'd like to set it to a single shared directory to make maintenance easier.
I saw that some artifact directories in the target dir are suffixed with what look like unique hashes, which seems safe, but the final products are not suffixed with hashes, which seems prone to name clashes. I'm not sure about this, as I am not an expert on Cargo.
I tried setting ~/.cargo/config to
[build]
target-dir = "./.build"
My original intention was to use the project's local ./.build directory, but somehow Cargo places all build files into the ~/.build directory. I got curious what would happen if I put all build files from every project into a single shared build directory.
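For what it's worth, an absolute path avoids the relative-path surprise, and the CARGO_TARGET_DIR environment variable does the same job; a minimal sketch, assuming ~/.build as the shared location:
$ export CARGO_TARGET_DIR="$HOME/.build"   # every cargo invocation now shares this target dir
$ cargo build                              # artifacts land in ~/.build/debug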
It has worked well with several different projects so far, but working for a few samples doesn't mean it's designed or guaranteed to work with every case.
In my case, I am using a single shared build directory for all projects of all workspaces of a user, not only the projects in one workspace: literally every project in every workspace of a user. As far as I know, Cargo is designed to work with a local target directory. If it is designed to work with only a local directory, a shared build directory is likely to cause some issues.
Rust/Cargo 1.38.0.
Yes, this is intended to be safe.
I agree with the comments that there are probably better methods of achieving your goal. Workspaces are a simple solution for a small group of crates, and sccache is a more principled caching mechanism.
See also:
Fix running Cargo concurrently (PR #2486)
Allow specifying a custom output directory (PR #1657)
Can I prevent cargo from rebuilding libraries with every new project?

Skipping configuration of projects that do not need compiling

I'm running a long build script that compiles a lot of projects, and for most builds make usually returns make: Nothing to be done for 'all', which is good because I'm not changing that much code between builds. The script still takes a long time to finish, however, simply because it calls the autotools configure script for each project, as well as some other scripts.
I've read somewhere that make uses timestamps to decide whether it needs to rebuild.
Is there a way to ask Make if it needs to build or not before calling the configure script?
Or is there another way to prevent re-configuring all the projects?
Having configured each project once, you do not need to run each configure script again unless those configure scripts themselves change, or you have reason to believe that the test results will have changed. If necessary, you can run config.status instead to recreate the output files without re-running all the tests. If you can be confident that the output files do not need to be recreated, either, then you can just run make.
Moreover, if you are using Automake in conjunction with Autoconf, then your Makefiles are built with tooling that detects when configure's m4 source, normally configure.ac, has changed, therefore requiring a reconfiguration. As long as the Makefile generated by Automake-based configure remains, you can just run make, and it will do the right thing. If the Makefile is missing, then you needed to run configure or config.status anyway.
Additionally, you could consider using a shared cache file for all your configure scripts, by passing the same file to each one via the --cache-file option (--config-cache, or -C, is shorthand for --cache-file=config.cache in the build directory). That should reduce the time each configure script consumes by avoiding redundant testing.
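As a rough sketch (the paths are made up), the difference between refreshing configure's outputs and re-running it with a cache looks like this:
$ ./config.status                                     # recreate Makefiles etc. from cached results, no tests re-run
$ ./configure -C                                      # full configure, but test results are cached in ./config.cache
$ ./configure --cache-file=$HOME/build/shared.cache   # share one cache file across several projects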

Linux CONFIG_LOCALVERSION_AUTO: What specific folders/files must be in the source tree for this feature to function?

In my build of Linux kernel 2.6.35.14 for an embedded system, I would like to use the CONFIG_LOCALVERSION_AUTO feature. The build process only includes modified files in the local build tree (version controlled) and sources the remainder from our vendor's source tree (not version controlled).
What specific files/folders need to be local for the feature to function?
You can check scripts/setlocalversion, which is used to get the version from the SCM (it can be Git, Mercurial, or SVN).
You can probably modify that script to suit your needs.
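To see what it would produce, you can run it directly from the top of the kernel source tree (the srctree argument here is just the current directory):
$ ./scripts/setlocalversion .   # prints the auto-generated suffix, e.g. -00012-g1a2b3c4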

How does a bin folder that is excluded from a project affect automated builds?

We do automated builds using NAnt and CruiseControl.NET. I'm very green when it comes to the process. While looking into some things, I noticed that for most (all?) of the solutions involved in the automated build process, the bin folders are included in the project. Is this a requirement for automated builds? If the bin folder is excluded, will the folder and any files in it need to be copied to the deployment servers manually?
Thanks.
If you are referring to the /bin/debug/ folder under a project, you should not need those files checked into your source control. If you have external libraries (log4net.dll, for example), they should be checked into source control along with your code, but in a separate folder (named "ThirdParty" or "DLLs", for example). When CruiseControl.NET runs, it should compile any assemblies that have been modified and copy the output to the /bin/debug/ folder, in the same way that Visual Studio copies those files on your box.
It is better to include the bin folder in the automated build process, since it contains external DLLs such as AjaxControlToolkit along with internal DLLs.
Here, we excluded the Debug folder and the user option files (*.suo) from the automated build.

Deploying custom software on Linux?

I write company-internal software in PHP and C++.
What are the best methods of deploying this type of software to Linux machines? Currently we use svn export; are there any other methods?
We use checkinstall. Just write a simple Makefile that copies the files to the target directories on the target machine, and then run checkinstall to create an RPM, DEB, or TGZ package, which you can later easily install with the distribution's package management tools.
You can even add shell scripts that are executed before and after the files are copied, so you can do some pre- and post-processing such as adding user accounts, crontab entries, etc.
Once you get more advanced, you can add dependencies to these packages so that a single command could also pull in and install PHP, MySQL, Apache, GCC libraries, and even required PHP or Apache modules or external C++ libs you might need.
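A minimal sketch of that flow, with the package name and version made up and checkinstall's defaults covering the rest:
$ make                                                        # build the project as usual
$ sudo checkinstall -D --pkgname=myapp --pkgversion=1.0 make install
  # -D builds a .deb; use -R for an RPM or -S for a Slackware TGZ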
I think it depends on what you mean by deploy. Typically, a deploy process for web projects involves a configuration scripting step in which you can take the same deploy package and tailor it to specific servers (staging, development, production) by altering simple configuration directives.
In my experience with Linux servers, these systems are often custom built and often use rsync rather than svn export and/or scp alone.
A script might be executed from the command line like so:
$ deploy-site --package=app \
--platform=dev \
--title="Revision 1.2"
Internally, the system would take whatever was in trunk for the given package from SVN (I'm sure you could adapt this really easily for Git too) and generate a new unique tag with the log entry "deploying Revision 1.2".
Then it would patch any configuration scripts with the appropriate changes (URLs, hosts, database passwords, etc.) before rsyncing the result to the appropriate destination.
If there are issues with the deployment, it's as easy as running the same command again only this time using one of your auto-generated tags from an earlier deploy:
$ deploy-site --package=app \
--platform=dev \
--title="Reverting to Revision 1.1" \
--tag=20090714200154
If you also have to compile on the other end, you could include a Makefile as part of your configuration patching, and then execute a command via ssh that compiles the recently deployed code once the rsync process completes.
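A rough sketch of that last step, with the host name, user, and paths entirely made up:
$ rsync -az --delete ./build/ deploy@dev-host:/srv/app/   # push the patched tree
$ ssh deploy@dev-host 'cd /srv/app && make'               # compile on the target after the sync finishes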
There is, in my experience, a tradeoff between security and ease of deployment.
For my deployments, I've never had a problem using scp to move the files from one machine to another. You can write a simple Bash script to take a list of machines (from a text file or STDIN) and push a given directory/application to a given directory on all of the machines. If you hypothetically pushed it to a bin directory, the end user would never know the difference.
The only problem with that is when you have multiple architectures and OSes, where it has to be compiled on each one individually. In that case, you could just write a script (the first example that pops into my mind is Net::SSH from Ruby) to take that list of servers, cd to the given directory, and run the compilation script. However, if all machines use the same architecture and configuration, you can hypothetically just compile once on the machine that you are using to distribute.
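As a sketch of the scp approach (hosts.txt, the app name, and the destination directory are all placeholders):
$ # push ./myapp to the same path on every machine listed in hosts.txt
$ while read host; do scp -r ./myapp "$host:/usr/local/bin/"; done < hosts.txt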
